00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2031 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3296 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.108 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.108 The recommended git tool is: git 00:00:00.109 using credential 00000000-0000-0000-0000-000000000002 00:00:00.110 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.132 Fetching changes from the remote Git repository 00:00:00.133 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.157 Using shallow fetch with depth 1 00:00:00.157 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.157 > git --version # timeout=10 00:00:00.181 > git --version # 'git version 2.39.2' 00:00:00.181 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.200 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.200 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.579 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.590 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.602 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD) 00:00:05.602 > git config core.sparsecheckout # timeout=10 00:00:05.614 > git read-tree -mu HEAD # timeout=10 00:00:05.630 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # 
timeout=5 00:00:05.647 Commit message: "packer: Add bios builder" 00:00:05.647 > git rev-list --no-walk 4313f32deecbb7108199ebd1913b403a3005dece # timeout=10 00:00:05.746 [Pipeline] Start of Pipeline 00:00:05.756 [Pipeline] library 00:00:05.757 Loading library shm_lib@master 00:00:05.757 Library shm_lib@master is cached. Copying from home. 00:00:05.771 [Pipeline] node 00:00:05.785 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.786 [Pipeline] { 00:00:05.794 [Pipeline] catchError 00:00:05.795 [Pipeline] { 00:00:05.804 [Pipeline] wrap 00:00:05.811 [Pipeline] { 00:00:05.817 [Pipeline] stage 00:00:05.818 [Pipeline] { (Prologue) 00:00:06.003 [Pipeline] sh 00:00:06.285 + logger -p user.info -t JENKINS-CI 00:00:06.301 [Pipeline] echo 00:00:06.303 Node: GP11 00:00:06.309 [Pipeline] sh 00:00:06.610 [Pipeline] setCustomBuildProperty 00:00:06.620 [Pipeline] echo 00:00:06.621 Cleanup processes 00:00:06.626 [Pipeline] sh 00:00:06.912 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.912 1589778 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.923 [Pipeline] sh 00:00:07.206 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.206 ++ grep -v 'sudo pgrep' 00:00:07.206 ++ awk '{print $1}' 00:00:07.206 + sudo kill -9 00:00:07.206 + true 00:00:07.220 [Pipeline] cleanWs 00:00:07.229 [WS-CLEANUP] Deleting project workspace... 00:00:07.229 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.235 [WS-CLEANUP] done 00:00:07.239 [Pipeline] setCustomBuildProperty 00:00:07.251 [Pipeline] sh 00:00:07.530 + sudo git config --global --replace-all safe.directory '*' 00:00:07.623 [Pipeline] httpRequest 00:00:07.658 [Pipeline] echo 00:00:07.659 Sorcerer 10.211.164.101 is alive 00:00:07.667 [Pipeline] httpRequest 00:00:07.672 HttpMethod: GET 00:00:07.672 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:07.673 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:07.687 Response Code: HTTP/1.1 200 OK 00:00:07.688 Success: Status code 200 is in the accepted range: 200,404 00:00:07.688 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:11.273 [Pipeline] sh 00:00:11.557 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:11.573 [Pipeline] httpRequest 00:00:11.603 [Pipeline] echo 00:00:11.605 Sorcerer 10.211.164.101 is alive 00:00:11.613 [Pipeline] httpRequest 00:00:11.618 HttpMethod: GET 00:00:11.619 URL: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:11.619 Sending request to url: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:11.644 Response Code: HTTP/1.1 200 OK 00:00:11.645 Success: Status code 200 is in the accepted range: 200,404 00:00:11.645 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:01:24.620 [Pipeline] sh 00:01:24.908 + tar --no-same-owner -xf spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:01:28.212 [Pipeline] sh 00:01:28.500 + git -C spdk log --oneline -n5 00:01:28.500 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 
00:01:28.500 fc2398dfa raid: clear base bdev configure_cb after executing 00:01:28.500 5558f3f50 raid: complete bdev_raid_create after sb is written 00:01:28.500 d005e023b raid: fix empty slot not updated in sb after resize 00:01:28.500 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set 00:01:28.520 [Pipeline] withCredentials 00:01:28.532 > git --version # timeout=10 00:01:28.544 > git --version # 'git version 2.39.2' 00:01:28.564 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:28.567 [Pipeline] { 00:01:28.578 [Pipeline] retry 00:01:28.580 [Pipeline] { 00:01:28.599 [Pipeline] sh 00:01:28.888 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:34.189 [Pipeline] } 00:01:34.211 [Pipeline] // retry 00:01:34.216 [Pipeline] } 00:01:34.236 [Pipeline] // withCredentials 00:01:34.247 [Pipeline] httpRequest 00:01:34.275 [Pipeline] echo 00:01:34.277 Sorcerer 10.211.164.101 is alive 00:01:34.285 [Pipeline] httpRequest 00:01:34.291 HttpMethod: GET 00:01:34.291 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:34.292 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:34.295 Response Code: HTTP/1.1 200 OK 00:01:34.296 Success: Status code 200 is in the accepted range: 200,404 00:01:34.296 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:36.209 [Pipeline] sh 00:01:36.495 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:38.413 [Pipeline] sh 00:01:38.700 + git -C dpdk log --oneline -n5 00:01:38.700 caf0f5d395 version: 22.11.4 00:01:38.700 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:38.700 dc9c799c7d vhost: fix missing spinlock unlock 00:01:38.700 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:38.700 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:38.712 
[Pipeline] } 00:01:38.729 [Pipeline] // stage 00:01:38.739 [Pipeline] stage 00:01:38.742 [Pipeline] { (Prepare) 00:01:38.764 [Pipeline] writeFile 00:01:38.782 [Pipeline] sh 00:01:39.068 + logger -p user.info -t JENKINS-CI 00:01:39.082 [Pipeline] sh 00:01:39.367 + logger -p user.info -t JENKINS-CI 00:01:39.380 [Pipeline] sh 00:01:39.665 + cat autorun-spdk.conf 00:01:39.665 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:39.665 SPDK_TEST_NVMF=1 00:01:39.665 SPDK_TEST_NVME_CLI=1 00:01:39.665 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:39.665 SPDK_TEST_NVMF_NICS=e810 00:01:39.665 SPDK_TEST_VFIOUSER=1 00:01:39.665 SPDK_RUN_UBSAN=1 00:01:39.665 NET_TYPE=phy 00:01:39.665 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:39.665 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:39.673 RUN_NIGHTLY=1 00:01:39.678 [Pipeline] readFile 00:01:39.704 [Pipeline] withEnv 00:01:39.706 [Pipeline] { 00:01:39.721 [Pipeline] sh 00:01:40.007 + set -ex 00:01:40.007 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:40.007 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:40.007 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:40.007 ++ SPDK_TEST_NVMF=1 00:01:40.007 ++ SPDK_TEST_NVME_CLI=1 00:01:40.007 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:40.007 ++ SPDK_TEST_NVMF_NICS=e810 00:01:40.007 ++ SPDK_TEST_VFIOUSER=1 00:01:40.007 ++ SPDK_RUN_UBSAN=1 00:01:40.007 ++ NET_TYPE=phy 00:01:40.007 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:40.007 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:40.007 ++ RUN_NIGHTLY=1 00:01:40.007 + case $SPDK_TEST_NVMF_NICS in 00:01:40.007 + DRIVERS=ice 00:01:40.007 + [[ tcp == \r\d\m\a ]] 00:01:40.007 + [[ -n ice ]] 00:01:40.007 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:40.007 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:40.007 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:40.007 rmmod: ERROR: Module irdma is not currently loaded 00:01:40.007 rmmod: 
ERROR: Module i40iw is not currently loaded 00:01:40.007 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:40.007 + true 00:01:40.007 + for D in $DRIVERS 00:01:40.007 + sudo modprobe ice 00:01:40.007 + exit 0 00:01:40.017 [Pipeline] } 00:01:40.036 [Pipeline] // withEnv 00:01:40.042 [Pipeline] } 00:01:40.059 [Pipeline] // stage 00:01:40.070 [Pipeline] catchError 00:01:40.072 [Pipeline] { 00:01:40.088 [Pipeline] timeout 00:01:40.088 Timeout set to expire in 50 min 00:01:40.090 [Pipeline] { 00:01:40.106 [Pipeline] stage 00:01:40.108 [Pipeline] { (Tests) 00:01:40.124 [Pipeline] sh 00:01:40.409 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:40.409 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:40.409 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:40.409 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:40.409 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:40.409 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:40.409 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:40.409 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:40.409 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:40.409 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:40.409 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:40.409 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:40.409 + source /etc/os-release 00:01:40.409 ++ NAME='Fedora Linux' 00:01:40.409 ++ VERSION='38 (Cloud Edition)' 00:01:40.409 ++ ID=fedora 00:01:40.409 ++ VERSION_ID=38 00:01:40.409 ++ VERSION_CODENAME= 00:01:40.409 ++ PLATFORM_ID=platform:f38 00:01:40.409 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:40.409 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:40.409 ++ LOGO=fedora-logo-icon 00:01:40.409 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:40.409 ++ HOME_URL=https://fedoraproject.org/ 00:01:40.409 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:40.409 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:40.409 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:40.409 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:40.409 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:40.409 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:40.409 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:40.409 ++ SUPPORT_END=2024-05-14 00:01:40.409 ++ VARIANT='Cloud Edition' 00:01:40.409 ++ VARIANT_ID=cloud 00:01:40.409 + uname -a 00:01:40.409 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:40.409 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:41.344 Hugepages 00:01:41.344 node hugesize free / total 00:01:41.344 node0 1048576kB 0 / 0 00:01:41.344 node0 2048kB 0 / 0 00:01:41.344 node1 1048576kB 0 / 0 00:01:41.344 node1 2048kB 0 / 0 00:01:41.344 00:01:41.344 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:41.344 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:41.344 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 
00:01:41.344 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:41.344 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:41.344 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:41.344 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:41.344 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:41.344 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:41.344 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:41.344 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:41.344 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:41.344 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:41.344 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:41.344 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:41.344 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:41.344 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:41.603 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:41.603 + rm -f /tmp/spdk-ld-path 00:01:41.603 + source autorun-spdk.conf 00:01:41.603 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.603 ++ SPDK_TEST_NVMF=1 00:01:41.603 ++ SPDK_TEST_NVME_CLI=1 00:01:41.603 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:41.603 ++ SPDK_TEST_NVMF_NICS=e810 00:01:41.603 ++ SPDK_TEST_VFIOUSER=1 00:01:41.603 ++ SPDK_RUN_UBSAN=1 00:01:41.603 ++ NET_TYPE=phy 00:01:41.603 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:41.603 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:41.603 ++ RUN_NIGHTLY=1 00:01:41.603 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:41.603 + [[ -n '' ]] 00:01:41.603 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:41.603 + for M in /var/spdk/build-*-manifest.txt 00:01:41.603 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:41.603 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:41.603 + for M in /var/spdk/build-*-manifest.txt 00:01:41.603 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:41.603 + cp /var/spdk/build-repo-manifest.txt 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:41.603 ++ uname 00:01:41.603 + [[ Linux == \L\i\n\u\x ]] 00:01:41.603 + sudo dmesg -T 00:01:41.603 + sudo dmesg --clear 00:01:41.603 + dmesg_pid=1590488 00:01:41.603 + [[ Fedora Linux == FreeBSD ]] 00:01:41.603 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:41.603 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:41.603 + sudo dmesg -Tw 00:01:41.603 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:41.603 + [[ -x /usr/src/fio-static/fio ]] 00:01:41.603 + export FIO_BIN=/usr/src/fio-static/fio 00:01:41.603 + FIO_BIN=/usr/src/fio-static/fio 00:01:41.603 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:41.603 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:41.603 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:41.603 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:41.603 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:41.603 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:41.603 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:41.603 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:41.603 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:41.603 Test configuration: 00:01:41.603 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.603 SPDK_TEST_NVMF=1 00:01:41.603 SPDK_TEST_NVME_CLI=1 00:01:41.603 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:41.603 SPDK_TEST_NVMF_NICS=e810 00:01:41.603 SPDK_TEST_VFIOUSER=1 00:01:41.603 SPDK_RUN_UBSAN=1 00:01:41.603 NET_TYPE=phy 00:01:41.603 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:41.603 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:41.603 RUN_NIGHTLY=1 00:44:11 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:41.603 00:44:11 -- scripts/common.sh@508 -- $ [[ -e 
/bin/wpdk_common.sh ]] 00:01:41.603 00:44:11 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:41.603 00:44:11 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:41.603 00:44:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.603 00:44:11 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.603 00:44:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.603 00:44:11 -- paths/export.sh@5 -- $ export PATH 00:01:41.603 00:44:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.603 00:44:11 -- 
common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:41.603 00:44:11 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:41.603 00:44:11 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721947451.XXXXXX 00:01:41.603 00:44:11 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721947451.XbtwC9 00:01:41.603 00:44:11 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:41.603 00:44:11 -- common/autobuild_common.sh@453 -- $ '[' -n v22.11.4 ']' 00:01:41.603 00:44:11 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:41.603 00:44:11 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:41.603 00:44:11 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:41.603 00:44:11 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:41.603 00:44:11 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:41.603 00:44:11 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:01:41.603 00:44:11 -- common/autotest_common.sh@10 -- $ set +x 00:01:41.603 00:44:11 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:41.603 00:44:11 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:41.603 00:44:11 -- pm/common@17 -- $ local monitor 00:01:41.603 00:44:11 -- 
pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:41.603 00:44:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:41.603 00:44:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:41.603 00:44:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:41.603 00:44:11 -- pm/common@21 -- $ date +%s 00:01:41.603 00:44:11 -- pm/common@21 -- $ date +%s 00:01:41.603 00:44:11 -- pm/common@25 -- $ sleep 1 00:01:41.603 00:44:11 -- pm/common@21 -- $ date +%s 00:01:41.603 00:44:11 -- pm/common@21 -- $ date +%s 00:01:41.603 00:44:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721947451 00:01:41.603 00:44:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721947451 00:01:41.603 00:44:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721947451 00:01:41.604 00:44:11 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721947451 00:01:41.604 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721947451_collect-vmstat.pm.log 00:01:41.604 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721947451_collect-cpu-load.pm.log 00:01:41.604 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721947451_collect-cpu-temp.pm.log 00:01:41.604 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721947451_collect-bmc-pm.bmc.pm.log 00:01:42.988 00:44:12 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:42.988 00:44:12 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:42.988 00:44:12 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:42.988 00:44:12 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:42.988 00:44:12 -- spdk/autobuild.sh@16 -- $ date -u 00:01:42.988 Thu Jul 25 10:44:12 PM UTC 2024 00:01:42.988 00:44:12 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:42.988 v24.09-pre-321-g704257090 00:01:42.988 00:44:12 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:42.988 00:44:12 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:42.988 00:44:12 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:42.988 00:44:12 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:42.988 00:44:12 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:42.988 00:44:12 -- common/autotest_common.sh@10 -- $ set +x 00:01:42.988 ************************************ 00:01:42.988 START TEST ubsan 00:01:42.988 ************************************ 00:01:42.988 00:44:13 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:42.988 using ubsan 00:01:42.988 00:01:42.988 real 0m0.000s 00:01:42.988 user 0m0.000s 00:01:42.988 sys 0m0.000s 00:01:42.988 00:44:13 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:42.989 00:44:13 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:42.989 ************************************ 00:01:42.989 END TEST ubsan 00:01:42.989 ************************************ 00:01:42.989 00:44:13 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:42.989 00:44:13 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:42.989 00:44:13 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:42.989 00:44:13 -- 
common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:01:42.989 00:44:13 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:42.989 00:44:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:42.989 ************************************ 00:01:42.989 START TEST build_native_dpdk 00:01:42.989 ************************************ 00:01:42.989 00:44:13 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:42.989 00:44:13 build_native_dpdk -- 
common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:42.989 caf0f5d395 version: 22.11.4 00:01:42.989 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:42.989 dc9c799c7d vhost: fix missing spinlock unlock 00:01:42.989 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:42.989 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" 
"bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:42.989 00:44:13 build_native_dpdk -- 
scripts/common.sh@361 -- $ (( v = 0 )) 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:42.989 patching file config/rte_config.h 00:01:42.989 Hunk #1 succeeded at 60 (offset 1 line). 
00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:42.989 00:44:13 build_native_dpdk -- scripts/common.sh@365 -- $ return 0 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:42.989 patching file lib/pcapng/rte_pcapng.c 00:01:42.989 Hunk #1 succeeded at 110 (offset -18 lines). 
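The trace above steps through the version guard in `scripts/common.sh`: both version strings are split on `.-:`, each segment is validated and compared numerically, and the first differing segment decides the result (here 22 < 24, so the `rte_pcapng.c` patch is applied). A condensed, illustrative re-implementation of that logic follows; it mirrors the traced `lt 22.11.4 24.07.0` entry point but is a sketch, not the exact upstream code.

```shell
#!/usr/bin/env bash
# Sketch of the dotted-version comparison traced above (modeled on
# scripts/common.sh's cmp_versions; not the exact upstream implementation).
# Returns 0 (true) when $1 is strictly less than $2.
lt() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        # Missing segments compare as 0; segments are assumed numeric here
        # (the real script first validates each with [[ $d =~ ^[0-9]+$ ]]).
        if (( ${v1[i]:-0} > ${v2[i]:-0} )); then return 1; fi
        if (( ${v1[i]:-0} < ${v2[i]:-0} )); then return 0; fi
    done
    return 1  # all segments equal: not strictly less-than
}

lt 22.11.4 24.07.0 && echo "22.11.4 < 24.07.0"   # first differing segment: 22 < 24
lt 22.11.4 21.0.0  || echo "22.11.4 >= 21.0.0"   # 22 > 21, as in the 'return 1' trace
```

The segment-by-segment numeric compare is what makes `22.11.4 < 24.07.0` come out correctly where a plain string comparison would not (e.g. `"9" > "10"` lexically).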
00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:42.989 00:44:13 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:47.183 The Meson build system 00:01:47.183 Version: 1.3.1 00:01:47.183 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:47.183 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:47.183 Build type: native build 00:01:47.183 Program cat found: YES (/usr/bin/cat) 00:01:47.183 Project name: DPDK 00:01:47.183 Project version: 22.11.4 00:01:47.183 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:47.183 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:47.183 Host machine cpu family: x86_64 00:01:47.183 Host machine cpu: x86_64 00:01:47.183 Message: ## Building in Developer Mode ## 00:01:47.183 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:47.183 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:47.183 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:47.183 Program objdump found: YES (/usr/bin/objdump) 00:01:47.184 Program python3 found: YES (/usr/bin/python3) 00:01:47.184 
Program cat found: YES (/usr/bin/cat) 00:01:47.184 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:01:47.184 Checking for size of "void *" : 8 00:01:47.184 Checking for size of "void *" : 8 (cached) 00:01:47.184 Library m found: YES 00:01:47.184 Library numa found: YES 00:01:47.184 Has header "numaif.h" : YES 00:01:47.184 Library fdt found: NO 00:01:47.184 Library execinfo found: NO 00:01:47.184 Has header "execinfo.h" : YES 00:01:47.184 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:47.184 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:47.184 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:47.184 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:47.184 Run-time dependency openssl found: YES 3.0.9 00:01:47.184 Run-time dependency libpcap found: YES 1.10.4 00:01:47.184 Has header "pcap.h" with dependency libpcap: YES 00:01:47.184 Compiler for C supports arguments -Wcast-qual: YES 00:01:47.184 Compiler for C supports arguments -Wdeprecated: YES 00:01:47.184 Compiler for C supports arguments -Wformat: YES 00:01:47.184 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:47.184 Compiler for C supports arguments -Wformat-security: NO 00:01:47.184 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:47.184 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:47.184 Compiler for C supports arguments -Wnested-externs: YES 00:01:47.184 Compiler for C supports arguments -Wold-style-definition: YES 00:01:47.184 Compiler for C supports arguments -Wpointer-arith: YES 00:01:47.184 Compiler for C supports arguments -Wsign-compare: YES 00:01:47.184 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:47.184 Compiler for C supports arguments -Wundef: YES 00:01:47.184 Compiler for C supports arguments -Wwrite-strings: YES 00:01:47.184 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:47.184 
Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:47.184 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:47.184 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:47.184 Compiler for C supports arguments -mavx512f: YES 00:01:47.184 Checking if "AVX512 checking" compiles: YES 00:01:47.184 Fetching value of define "__SSE4_2__" : 1 00:01:47.184 Fetching value of define "__AES__" : 1 00:01:47.184 Fetching value of define "__AVX__" : 1 00:01:47.184 Fetching value of define "__AVX2__" : (undefined) 00:01:47.184 Fetching value of define "__AVX512BW__" : (undefined) 00:01:47.184 Fetching value of define "__AVX512CD__" : (undefined) 00:01:47.184 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:47.184 Fetching value of define "__AVX512F__" : (undefined) 00:01:47.184 Fetching value of define "__AVX512VL__" : (undefined) 00:01:47.184 Fetching value of define "__PCLMUL__" : 1 00:01:47.184 Fetching value of define "__RDRND__" : 1 00:01:47.184 Fetching value of define "__RDSEED__" : (undefined) 00:01:47.184 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:47.184 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:47.184 Message: lib/kvargs: Defining dependency "kvargs" 00:01:47.184 Message: lib/telemetry: Defining dependency "telemetry" 00:01:47.184 Checking for function "getentropy" : YES 00:01:47.184 Message: lib/eal: Defining dependency "eal" 00:01:47.184 Message: lib/ring: Defining dependency "ring" 00:01:47.184 Message: lib/rcu: Defining dependency "rcu" 00:01:47.184 Message: lib/mempool: Defining dependency "mempool" 00:01:47.184 Message: lib/mbuf: Defining dependency "mbuf" 00:01:47.184 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:47.184 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:47.184 Compiler for C supports arguments -mpclmul: YES 00:01:47.184 Compiler for C supports arguments -maes: YES 00:01:47.184 Compiler for C supports 
arguments -mavx512f: YES (cached) 00:01:47.184 Compiler for C supports arguments -mavx512bw: YES 00:01:47.184 Compiler for C supports arguments -mavx512dq: YES 00:01:47.184 Compiler for C supports arguments -mavx512vl: YES 00:01:47.184 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:47.184 Compiler for C supports arguments -mavx2: YES 00:01:47.184 Compiler for C supports arguments -mavx: YES 00:01:47.184 Message: lib/net: Defining dependency "net" 00:01:47.184 Message: lib/meter: Defining dependency "meter" 00:01:47.184 Message: lib/ethdev: Defining dependency "ethdev" 00:01:47.184 Message: lib/pci: Defining dependency "pci" 00:01:47.184 Message: lib/cmdline: Defining dependency "cmdline" 00:01:47.184 Message: lib/metrics: Defining dependency "metrics" 00:01:47.184 Message: lib/hash: Defining dependency "hash" 00:01:47.184 Message: lib/timer: Defining dependency "timer" 00:01:47.184 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:47.184 Compiler for C supports arguments -mavx2: YES (cached) 00:01:47.184 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:47.184 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:47.184 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:47.184 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:47.184 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:47.184 Message: lib/acl: Defining dependency "acl" 00:01:47.184 Message: lib/bbdev: Defining dependency "bbdev" 00:01:47.184 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:47.184 Run-time dependency libelf found: YES 0.190 00:01:47.184 Message: lib/bpf: Defining dependency "bpf" 00:01:47.184 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:47.184 Message: lib/compressdev: Defining dependency "compressdev" 00:01:47.184 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:47.184 Message: lib/distributor: Defining 
dependency "distributor" 00:01:47.184 Message: lib/efd: Defining dependency "efd" 00:01:47.184 Message: lib/eventdev: Defining dependency "eventdev" 00:01:47.184 Message: lib/gpudev: Defining dependency "gpudev" 00:01:47.184 Message: lib/gro: Defining dependency "gro" 00:01:47.184 Message: lib/gso: Defining dependency "gso" 00:01:47.184 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:47.184 Message: lib/jobstats: Defining dependency "jobstats" 00:01:47.184 Message: lib/latencystats: Defining dependency "latencystats" 00:01:47.184 Message: lib/lpm: Defining dependency "lpm" 00:01:47.184 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:47.184 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:47.184 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:47.184 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:47.184 Message: lib/member: Defining dependency "member" 00:01:47.184 Message: lib/pcapng: Defining dependency "pcapng" 00:01:47.184 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:47.184 Message: lib/power: Defining dependency "power" 00:01:47.184 Message: lib/rawdev: Defining dependency "rawdev" 00:01:47.184 Message: lib/regexdev: Defining dependency "regexdev" 00:01:47.184 Message: lib/dmadev: Defining dependency "dmadev" 00:01:47.184 Message: lib/rib: Defining dependency "rib" 00:01:47.184 Message: lib/reorder: Defining dependency "reorder" 00:01:47.184 Message: lib/sched: Defining dependency "sched" 00:01:47.184 Message: lib/security: Defining dependency "security" 00:01:47.184 Message: lib/stack: Defining dependency "stack" 00:01:47.184 Has header "linux/userfaultfd.h" : YES 00:01:47.184 Message: lib/vhost: Defining dependency "vhost" 00:01:47.184 Message: lib/ipsec: Defining dependency "ipsec" 00:01:47.184 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:47.184 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:47.184 
Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:47.184 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:47.184 Message: lib/fib: Defining dependency "fib" 00:01:47.184 Message: lib/port: Defining dependency "port" 00:01:47.184 Message: lib/pdump: Defining dependency "pdump" 00:01:47.184 Message: lib/table: Defining dependency "table" 00:01:47.184 Message: lib/pipeline: Defining dependency "pipeline" 00:01:47.184 Message: lib/graph: Defining dependency "graph" 00:01:47.184 Message: lib/node: Defining dependency "node" 00:01:47.184 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:47.184 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:47.184 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:47.184 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:47.184 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:47.184 Compiler for C supports arguments -Wno-unused-value: YES 00:01:48.129 Compiler for C supports arguments -Wno-format: YES 00:01:48.129 Compiler for C supports arguments -Wno-format-security: YES 00:01:48.129 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:48.129 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:48.129 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:48.129 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:48.129 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:48.129 Compiler for C supports arguments -mavx2: YES (cached) 00:01:48.129 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:48.129 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:48.129 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:48.129 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:48.129 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:48.129 Program doxygen found: YES 
(/usr/bin/doxygen) 00:01:48.129 Configuring doxy-api.conf using configuration 00:01:48.129 Program sphinx-build found: NO 00:01:48.129 Configuring rte_build_config.h using configuration 00:01:48.129 Message: 00:01:48.129 ================= 00:01:48.129 Applications Enabled 00:01:48.129 ================= 00:01:48.129 00:01:48.129 apps: 00:01:48.129 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:48.129 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:48.129 test-security-perf, 00:01:48.129 00:01:48.129 Message: 00:01:48.129 ================= 00:01:48.129 Libraries Enabled 00:01:48.129 ================= 00:01:48.129 00:01:48.129 libs: 00:01:48.129 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:48.129 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:01:48.129 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:48.129 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:48.129 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:48.129 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:48.129 table, pipeline, graph, node, 00:01:48.129 00:01:48.129 Message: 00:01:48.129 =============== 00:01:48.129 Drivers Enabled 00:01:48.129 =============== 00:01:48.129 00:01:48.129 common: 00:01:48.129 00:01:48.129 bus: 00:01:48.129 pci, vdev, 00:01:48.129 mempool: 00:01:48.129 ring, 00:01:48.129 dma: 00:01:48.129 00:01:48.129 net: 00:01:48.129 i40e, 00:01:48.129 raw: 00:01:48.129 00:01:48.129 crypto: 00:01:48.129 00:01:48.129 compress: 00:01:48.129 00:01:48.129 regex: 00:01:48.129 00:01:48.129 vdpa: 00:01:48.129 00:01:48.129 event: 00:01:48.129 00:01:48.129 baseband: 00:01:48.129 00:01:48.129 gpu: 00:01:48.129 00:01:48.129 00:01:48.129 Message: 00:01:48.129 ================= 00:01:48.129 Content Skipped 00:01:48.129 ================= 00:01:48.129 00:01:48.129 apps: 
00:01:48.129 00:01:48.129 libs: 00:01:48.129 kni: explicitly disabled via build config (deprecated lib) 00:01:48.129 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:48.129 00:01:48.129 drivers: 00:01:48.129 common/cpt: not in enabled drivers build config 00:01:48.129 common/dpaax: not in enabled drivers build config 00:01:48.129 common/iavf: not in enabled drivers build config 00:01:48.129 common/idpf: not in enabled drivers build config 00:01:48.130 common/mvep: not in enabled drivers build config 00:01:48.130 common/octeontx: not in enabled drivers build config 00:01:48.130 bus/auxiliary: not in enabled drivers build config 00:01:48.130 bus/dpaa: not in enabled drivers build config 00:01:48.130 bus/fslmc: not in enabled drivers build config 00:01:48.130 bus/ifpga: not in enabled drivers build config 00:01:48.130 bus/vmbus: not in enabled drivers build config 00:01:48.130 common/cnxk: not in enabled drivers build config 00:01:48.130 common/mlx5: not in enabled drivers build config 00:01:48.130 common/qat: not in enabled drivers build config 00:01:48.130 common/sfc_efx: not in enabled drivers build config 00:01:48.130 mempool/bucket: not in enabled drivers build config 00:01:48.130 mempool/cnxk: not in enabled drivers build config 00:01:48.130 mempool/dpaa: not in enabled drivers build config 00:01:48.130 mempool/dpaa2: not in enabled drivers build config 00:01:48.130 mempool/octeontx: not in enabled drivers build config 00:01:48.130 mempool/stack: not in enabled drivers build config 00:01:48.130 dma/cnxk: not in enabled drivers build config 00:01:48.130 dma/dpaa: not in enabled drivers build config 00:01:48.130 dma/dpaa2: not in enabled drivers build config 00:01:48.130 dma/hisilicon: not in enabled drivers build config 00:01:48.130 dma/idxd: not in enabled drivers build config 00:01:48.130 dma/ioat: not in enabled drivers build config 00:01:48.130 dma/skeleton: not in enabled drivers build config 00:01:48.130 net/af_packet: not in 
enabled drivers build config 00:01:48.130 net/af_xdp: not in enabled drivers build config 00:01:48.130 net/ark: not in enabled drivers build config 00:01:48.130 net/atlantic: not in enabled drivers build config 00:01:48.130 net/avp: not in enabled drivers build config 00:01:48.130 net/axgbe: not in enabled drivers build config 00:01:48.130 net/bnx2x: not in enabled drivers build config 00:01:48.130 net/bnxt: not in enabled drivers build config 00:01:48.130 net/bonding: not in enabled drivers build config 00:01:48.130 net/cnxk: not in enabled drivers build config 00:01:48.130 net/cxgbe: not in enabled drivers build config 00:01:48.130 net/dpaa: not in enabled drivers build config 00:01:48.130 net/dpaa2: not in enabled drivers build config 00:01:48.130 net/e1000: not in enabled drivers build config 00:01:48.130 net/ena: not in enabled drivers build config 00:01:48.130 net/enetc: not in enabled drivers build config 00:01:48.130 net/enetfec: not in enabled drivers build config 00:01:48.130 net/enic: not in enabled drivers build config 00:01:48.130 net/failsafe: not in enabled drivers build config 00:01:48.130 net/fm10k: not in enabled drivers build config 00:01:48.130 net/gve: not in enabled drivers build config 00:01:48.130 net/hinic: not in enabled drivers build config 00:01:48.130 net/hns3: not in enabled drivers build config 00:01:48.130 net/iavf: not in enabled drivers build config 00:01:48.130 net/ice: not in enabled drivers build config 00:01:48.130 net/idpf: not in enabled drivers build config 00:01:48.130 net/igc: not in enabled drivers build config 00:01:48.130 net/ionic: not in enabled drivers build config 00:01:48.130 net/ipn3ke: not in enabled drivers build config 00:01:48.130 net/ixgbe: not in enabled drivers build config 00:01:48.130 net/kni: not in enabled drivers build config 00:01:48.130 net/liquidio: not in enabled drivers build config 00:01:48.130 net/mana: not in enabled drivers build config 00:01:48.130 net/memif: not in enabled drivers build 
config 00:01:48.130 net/mlx4: not in enabled drivers build config 00:01:48.130 net/mlx5: not in enabled drivers build config 00:01:48.130 net/mvneta: not in enabled drivers build config 00:01:48.130 net/mvpp2: not in enabled drivers build config 00:01:48.130 net/netvsc: not in enabled drivers build config 00:01:48.130 net/nfb: not in enabled drivers build config 00:01:48.130 net/nfp: not in enabled drivers build config 00:01:48.130 net/ngbe: not in enabled drivers build config 00:01:48.130 net/null: not in enabled drivers build config 00:01:48.130 net/octeontx: not in enabled drivers build config 00:01:48.130 net/octeon_ep: not in enabled drivers build config 00:01:48.130 net/pcap: not in enabled drivers build config 00:01:48.130 net/pfe: not in enabled drivers build config 00:01:48.130 net/qede: not in enabled drivers build config 00:01:48.130 net/ring: not in enabled drivers build config 00:01:48.130 net/sfc: not in enabled drivers build config 00:01:48.130 net/softnic: not in enabled drivers build config 00:01:48.130 net/tap: not in enabled drivers build config 00:01:48.130 net/thunderx: not in enabled drivers build config 00:01:48.130 net/txgbe: not in enabled drivers build config 00:01:48.130 net/vdev_netvsc: not in enabled drivers build config 00:01:48.130 net/vhost: not in enabled drivers build config 00:01:48.130 net/virtio: not in enabled drivers build config 00:01:48.130 net/vmxnet3: not in enabled drivers build config 00:01:48.130 raw/cnxk_bphy: not in enabled drivers build config 00:01:48.130 raw/cnxk_gpio: not in enabled drivers build config 00:01:48.130 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:48.130 raw/ifpga: not in enabled drivers build config 00:01:48.130 raw/ntb: not in enabled drivers build config 00:01:48.130 raw/skeleton: not in enabled drivers build config 00:01:48.130 crypto/armv8: not in enabled drivers build config 00:01:48.130 crypto/bcmfs: not in enabled drivers build config 00:01:48.130 crypto/caam_jr: not in enabled 
drivers build config 00:01:48.130 crypto/ccp: not in enabled drivers build config 00:01:48.130 crypto/cnxk: not in enabled drivers build config 00:01:48.130 crypto/dpaa_sec: not in enabled drivers build config 00:01:48.130 crypto/dpaa2_sec: not in enabled drivers build config 00:01:48.130 crypto/ipsec_mb: not in enabled drivers build config 00:01:48.130 crypto/mlx5: not in enabled drivers build config 00:01:48.130 crypto/mvsam: not in enabled drivers build config 00:01:48.130 crypto/nitrox: not in enabled drivers build config 00:01:48.130 crypto/null: not in enabled drivers build config 00:01:48.130 crypto/octeontx: not in enabled drivers build config 00:01:48.130 crypto/openssl: not in enabled drivers build config 00:01:48.130 crypto/scheduler: not in enabled drivers build config 00:01:48.130 crypto/uadk: not in enabled drivers build config 00:01:48.130 crypto/virtio: not in enabled drivers build config 00:01:48.130 compress/isal: not in enabled drivers build config 00:01:48.130 compress/mlx5: not in enabled drivers build config 00:01:48.130 compress/octeontx: not in enabled drivers build config 00:01:48.130 compress/zlib: not in enabled drivers build config 00:01:48.130 regex/mlx5: not in enabled drivers build config 00:01:48.130 regex/cn9k: not in enabled drivers build config 00:01:48.130 vdpa/ifc: not in enabled drivers build config 00:01:48.130 vdpa/mlx5: not in enabled drivers build config 00:01:48.130 vdpa/sfc: not in enabled drivers build config 00:01:48.130 event/cnxk: not in enabled drivers build config 00:01:48.130 event/dlb2: not in enabled drivers build config 00:01:48.130 event/dpaa: not in enabled drivers build config 00:01:48.130 event/dpaa2: not in enabled drivers build config 00:01:48.130 event/dsw: not in enabled drivers build config 00:01:48.130 event/opdl: not in enabled drivers build config 00:01:48.130 event/skeleton: not in enabled drivers build config 00:01:48.130 event/sw: not in enabled drivers build config 00:01:48.130 event/octeontx: 
not in enabled drivers build config 00:01:48.130 baseband/acc: not in enabled drivers build config 00:01:48.130 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:48.130 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:48.130 baseband/la12xx: not in enabled drivers build config 00:01:48.130 baseband/null: not in enabled drivers build config 00:01:48.130 baseband/turbo_sw: not in enabled drivers build config 00:01:48.130 gpu/cuda: not in enabled drivers build config 00:01:48.130 00:01:48.130 00:01:48.130 Build targets in project: 316 00:01:48.130 00:01:48.130 DPDK 22.11.4 00:01:48.130 00:01:48.130 User defined options 00:01:48.130 libdir : lib 00:01:48.130 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:48.130 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:48.130 c_link_args : 00:01:48.130 enable_docs : false 00:01:48.130 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:48.130 enable_kmods : false 00:01:48.130 machine : native 00:01:48.130 tests : false 00:01:48.130 00:01:48.130 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:48.130 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
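The configure step ends with Meson flagging the legacy invocation style: the autobuild script runs `meson <builddir> [options]`, which newer Meson releases deprecate in favor of the explicit `setup` subcommand. The equivalent explicit form would look like the fragment below (same options as the logged call; the prefix path is abbreviated for illustration, and this is a command sketch rather than output from this run).

```shell
# Explicit-subcommand equivalent of the deprecated `meson build-tmp ...` call
# logged above; behavior is identical, but the warning goes away.
meson setup build-tmp \
    --prefix="$PWD/dpdk/build" --libdir lib \
    -Denable_docs=false -Denable_kmods=false -Dtests=false \
    -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Dmachine=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
```

Note that the configure output also warns separately that the `machine` option itself is deprecated in favor of `cpu_instruction_set`, so a fully warning-free invocation would substitute `-Dcpu_instruction_set=native` as well.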
00:01:48.130 00:44:18 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:48.130 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:48.130 [1/745] Generating lib/rte_kvargs_mingw with a custom command 00:01:48.130 [2/745] Generating lib/rte_kvargs_def with a custom command 00:01:48.130 [3/745] Generating lib/rte_telemetry_mingw with a custom command 00:01:48.130 [4/745] Generating lib/rte_telemetry_def with a custom command 00:01:48.130 [5/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:48.130 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:48.130 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:48.130 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:48.130 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:48.130 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:48.130 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:48.130 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:48.130 [13/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:48.130 [14/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:48.130 [15/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:48.130 [16/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:48.130 [17/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:48.392 [18/745] Linking static target lib/librte_kvargs.a 00:01:48.392 [19/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:48.392 [20/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:48.392 
[21/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:48.392 [22/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:48.392 [23/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:48.392 [24/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:48.392 [25/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:48.392 [26/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:48.392 [27/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:48.392 [28/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:48.392 [29/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:48.392 [30/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:48.392 [31/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:48.392 [32/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:48.392 [33/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:48.392 [34/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:48.392 [35/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:48.392 [36/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:48.392 [37/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:48.392 [38/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:48.392 [39/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:48.392 [40/745] Generating lib/rte_eal_def with a custom command 00:01:48.392 [41/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:48.392 [42/745] Generating lib/rte_eal_mingw with a custom command 00:01:48.392 [43/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 
00:01:48.392 [44/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:48.392 [45/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:48.392 [46/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:48.392 [47/745] Generating lib/rte_ring_def with a custom command 00:01:48.392 [48/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:48.392 [49/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:48.392 [50/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:48.392 [51/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:48.392 [52/745] Generating lib/rte_ring_mingw with a custom command 00:01:48.392 [53/745] Generating lib/rte_rcu_def with a custom command 00:01:48.392 [54/745] Generating lib/rte_rcu_mingw with a custom command 00:01:48.392 [55/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:48.392 [56/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:48.392 [57/745] Generating lib/rte_mempool_def with a custom command 00:01:48.392 [58/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:48.392 [59/745] Generating lib/rte_mempool_mingw with a custom command 00:01:48.392 [60/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:48.392 [61/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:48.392 [62/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:48.392 [63/745] Generating lib/rte_mbuf_mingw with a custom command 00:01:48.392 [64/745] Generating lib/rte_mbuf_def with a custom command 00:01:48.392 [65/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:48.392 [66/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:48.651 [67/745] Generating 
lib/rte_net_mingw with a custom command 00:01:48.651 [68/745] Generating lib/rte_meter_def with a custom command 00:01:48.651 [69/745] Generating lib/rte_net_def with a custom command 00:01:48.651 [70/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:48.651 [71/745] Generating lib/rte_meter_mingw with a custom command 00:01:48.651 [72/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:48.651 [73/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:48.651 [74/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:48.651 [75/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:48.651 [76/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:48.651 [77/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:48.651 [78/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.651 [79/745] Generating lib/rte_ethdev_def with a custom command 00:01:48.651 [80/745] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:48.651 [81/745] Linking static target lib/librte_ring.a 00:01:48.651 [82/745] Generating lib/rte_ethdev_mingw with a custom command 00:01:48.651 [83/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:48.651 [84/745] Linking target lib/librte_kvargs.so.23.0 00:01:48.651 [85/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:48.915 [86/745] Generating lib/rte_pci_def with a custom command 00:01:48.915 [87/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:48.915 [88/745] Linking static target lib/librte_meter.a 00:01:48.915 [89/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:48.915 [90/745] Generating lib/rte_pci_mingw with a custom command 00:01:48.915 [91/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 
00:01:48.915 [92/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:48.915 [93/745] Linking static target lib/librte_pci.a 00:01:48.915 [94/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:48.915 [95/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:48.915 [96/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:48.915 [97/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:48.915 [98/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:49.176 [99/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.176 [100/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:49.176 [101/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.176 [102/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:49.176 [103/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:49.177 [104/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:49.177 [105/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:49.177 [106/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.177 [107/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:49.177 [108/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:49.177 [109/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:49.177 [110/745] Generating lib/rte_cmdline_def with a custom command 00:01:49.177 [111/745] Generating lib/rte_cmdline_mingw with a custom command 00:01:49.177 [112/745] Linking static target lib/librte_telemetry.a 00:01:49.177 [113/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:49.177 [114/745] 
Generating lib/rte_metrics_mingw with a custom command 00:01:49.177 [115/745] Generating lib/rte_metrics_def with a custom command 00:01:49.177 [116/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:49.436 [117/745] Generating lib/rte_hash_def with a custom command 00:01:49.436 [118/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:49.436 [119/745] Generating lib/rte_hash_mingw with a custom command 00:01:49.436 [120/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:49.436 [121/745] Generating lib/rte_timer_def with a custom command 00:01:49.436 [122/745] Generating lib/rte_timer_mingw with a custom command 00:01:49.436 [123/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:49.436 [124/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:49.436 [125/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:49.702 [126/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:49.702 [127/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:49.702 [128/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:49.702 [129/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:49.702 [130/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:49.702 [131/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:49.702 [132/745] Generating lib/rte_acl_def with a custom command 00:01:49.702 [133/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:49.702 [134/745] Generating lib/rte_acl_mingw with a custom command 00:01:49.702 [135/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:49.702 [136/745] Generating lib/rte_bbdev_def with a custom command 00:01:49.702 [137/745] Generating lib/rte_bbdev_mingw with a custom command 00:01:49.702 [138/745] Generating 
lib/rte_bitratestats_def with a custom command 00:01:49.702 [139/745] Generating lib/rte_bitratestats_mingw with a custom command 00:01:49.702 [140/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:49.702 [141/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:49.702 [142/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:49.702 [143/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.960 [144/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:49.960 [145/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:49.960 [146/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:49.960 [147/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:49.960 [148/745] Linking target lib/librte_telemetry.so.23.0 00:01:49.960 [149/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:49.960 [150/745] Generating lib/rte_bpf_def with a custom command 00:01:49.960 [151/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:49.960 [152/745] Generating lib/rte_bpf_mingw with a custom command 00:01:49.960 [153/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:49.960 [154/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:49.960 [155/745] Generating lib/rte_cfgfile_def with a custom command 00:01:49.960 [156/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:49.960 [157/745] Generating lib/rte_cfgfile_mingw with a custom command 00:01:49.960 [158/745] Generating lib/rte_compressdev_def with a custom command 00:01:49.960 [159/745] Generating lib/rte_compressdev_mingw with a custom command 00:01:49.960 [160/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:49.960 [161/745] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:50.219 [162/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:50.219 [163/745] Generating lib/rte_cryptodev_def with a custom command 00:01:50.219 [164/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:50.219 [165/745] Generating lib/rte_cryptodev_mingw with a custom command 00:01:50.219 [166/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:50.219 [167/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:50.219 [168/745] Linking static target lib/librte_rcu.a 00:01:50.219 [169/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:50.219 [170/745] Generating lib/rte_distributor_def with a custom command 00:01:50.219 [171/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:50.219 [172/745] Generating lib/rte_distributor_mingw with a custom command 00:01:50.219 [173/745] Linking static target lib/librte_cmdline.a 00:01:50.219 [174/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:50.219 [175/745] Linking static target lib/librte_timer.a 00:01:50.219 [176/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:50.219 [177/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:50.219 [178/745] Generating lib/rte_efd_def with a custom command 00:01:50.219 [179/745] Generating lib/rte_efd_mingw with a custom command 00:01:50.219 [180/745] Linking static target lib/librte_net.a 00:01:50.479 [181/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:50.479 [182/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:50.479 [183/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:50.479 [184/745] Linking static target lib/librte_metrics.a 00:01:50.479 [185/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:50.479 
[186/745] Linking static target lib/librte_cfgfile.a 00:01:50.479 [187/745] Linking static target lib/librte_mempool.a 00:01:50.739 [188/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:50.739 [189/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.739 [190/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.739 [191/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.739 [192/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:50.739 [193/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:50.739 [194/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:50.739 [195/745] Linking static target lib/librte_eal.a 00:01:50.739 [196/745] Generating lib/rte_eventdev_def with a custom command 00:01:50.739 [197/745] Generating lib/rte_eventdev_mingw with a custom command 00:01:51.003 [198/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:51.003 [199/745] Generating lib/rte_gpudev_def with a custom command 00:01:51.003 [200/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:51.003 [201/745] Generating lib/rte_gpudev_mingw with a custom command 00:01:51.003 [202/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:51.003 [203/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:51.003 [204/745] Linking static target lib/librte_bitratestats.a 00:01:51.003 [205/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.003 [206/745] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:51.003 [207/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:51.003 [208/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.003 [209/745] Generating lib/rte_gro_def with a custom 
command 00:01:51.003 [210/745] Generating lib/rte_gro_mingw with a custom command 00:01:51.264 [211/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:51.264 [212/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:51.264 [213/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:51.264 [214/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.264 [215/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:51.264 [216/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:51.264 [217/745] Generating lib/rte_gso_def with a custom command 00:01:51.264 [218/745] Generating lib/rte_gso_mingw with a custom command 00:01:51.528 [219/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:51.528 [220/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:51.528 [221/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:51.528 [222/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:51.528 [223/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:51.528 [224/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:51.528 [225/745] Linking static target lib/librte_bbdev.a 00:01:51.528 [226/745] Generating lib/rte_ip_frag_mingw with a custom command 00:01:51.528 [227/745] Generating lib/rte_ip_frag_def with a custom command 00:01:51.528 [228/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.791 [229/745] Generating lib/rte_jobstats_def with a custom command 00:01:51.791 [230/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.791 [231/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 
00:01:51.791 [232/745] Generating lib/rte_jobstats_mingw with a custom command 00:01:51.791 [233/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:51.791 [234/745] Generating lib/rte_latencystats_def with a custom command 00:01:51.791 [235/745] Generating lib/rte_latencystats_mingw with a custom command 00:01:51.791 [236/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:51.791 [237/745] Linking static target lib/librte_compressdev.a 00:01:51.791 [238/745] Generating lib/rte_lpm_def with a custom command 00:01:51.791 [239/745] Generating lib/rte_lpm_mingw with a custom command 00:01:51.791 [240/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:51.791 [241/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:52.055 [242/745] Linking static target lib/librte_jobstats.a 00:01:52.055 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:52.055 [244/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:52.055 [245/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:52.055 [246/745] Generating lib/rte_member_def with a custom command 00:01:52.316 [247/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:52.316 [248/745] Linking static target lib/librte_distributor.a 00:01:52.316 [249/745] Generating lib/rte_member_mingw with a custom command 00:01:52.316 [250/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:52.316 [251/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:52.316 [252/745] Generating lib/rte_pcapng_def with a custom command 00:01:52.316 [253/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.316 [254/745] Generating lib/rte_pcapng_mingw with a custom command 00:01:52.316 [255/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:52.316 [256/745] 
Linking static target lib/librte_bpf.a 00:01:52.587 [257/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.587 [258/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:52.587 [259/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:52.587 [260/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:52.587 [261/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:52.587 [262/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:52.587 [263/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:52.587 [264/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:52.587 [265/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.587 [266/745] Generating lib/rte_power_def with a custom command 00:01:52.587 [267/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:52.587 [268/745] Linking static target lib/librte_gpudev.a 00:01:52.587 [269/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:52.587 [270/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:52.587 [271/745] Generating lib/rte_power_mingw with a custom command 00:01:52.587 [272/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:52.587 [273/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:52.850 [274/745] Generating lib/rte_rawdev_def with a custom command 00:01:52.851 [275/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:52.851 [276/745] Generating lib/rte_rawdev_mingw with a custom command 00:01:52.851 [277/745] Linking static target lib/librte_gro.a 00:01:52.851 [278/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:52.851 [279/745] Generating lib/rte_regexdev_def with a custom command 00:01:52.851 [280/745] Generating lib/rte_regexdev_mingw 
with a custom command 00:01:52.851 [281/745] Generating lib/rte_dmadev_def with a custom command 00:01:52.851 [282/745] Generating lib/rte_dmadev_mingw with a custom command 00:01:52.851 [283/745] Generating lib/rte_rib_def with a custom command 00:01:52.851 [284/745] Generating lib/rte_rib_mingw with a custom command 00:01:52.851 [285/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.851 [286/745] Generating lib/rte_reorder_def with a custom command 00:01:52.851 [287/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:52.851 [288/745] Generating lib/rte_reorder_mingw with a custom command 00:01:53.114 [289/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:53.114 [290/745] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.114 [291/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:53.114 [292/745] Generating lib/rte_sched_def with a custom command 00:01:53.114 [293/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:53.114 [294/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:53.114 [295/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:53.114 [296/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:53.114 [297/745] Generating lib/rte_sched_mingw with a custom command 00:01:53.114 [298/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:53.114 [299/745] Generating lib/rte_security_def with a custom command 00:01:53.114 [300/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:53.114 [301/745] Generating lib/rte_security_mingw with a custom command 00:01:53.114 [302/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.114 [303/745] Compiling C object 
lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:53.381 [304/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:53.381 [305/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:53.381 [306/745] Generating lib/rte_stack_def with a custom command 00:01:53.381 [307/745] Generating lib/rte_stack_mingw with a custom command 00:01:53.381 [308/745] Linking static target lib/librte_latencystats.a 00:01:53.381 [309/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:53.381 [310/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:53.381 [311/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:53.381 [312/745] Linking static target lib/librte_rawdev.a 00:01:53.381 [313/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:53.381 [314/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:53.381 [315/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:53.381 [316/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:53.381 [317/745] Linking static target lib/librte_stack.a 00:01:53.381 [318/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:53.381 [319/745] Generating lib/rte_vhost_def with a custom command 00:01:53.381 [320/745] Generating lib/rte_vhost_mingw with a custom command 00:01:53.381 [321/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:53.381 [322/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:53.645 [323/745] Linking static target lib/librte_dmadev.a 00:01:53.645 [324/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:53.645 [325/745] Linking static target lib/librte_ip_frag.a 00:01:53.645 [326/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.645 
[327/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:53.645 [328/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:53.645 [329/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.645 [330/745] Generating lib/rte_ipsec_mingw with a custom command 00:01:53.645 [331/745] Generating lib/rte_ipsec_def with a custom command 00:01:53.908 [332/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:53.908 [333/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:53.908 [334/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.908 [335/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.170 [336/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:54.170 [337/745] Generating lib/rte_fib_def with a custom command 00:01:54.170 [338/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.170 [339/745] Generating lib/rte_fib_mingw with a custom command 00:01:54.170 [340/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:54.170 [341/745] Linking static target lib/librte_gso.a 00:01:54.170 [342/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:54.170 [343/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:54.170 [344/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:54.170 [345/745] Linking static target lib/librte_regexdev.a 00:01:54.433 [346/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.433 [347/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.433 [348/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:54.433 [349/745] 
Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:54.433 [350/745] Linking static target lib/librte_efd.a 00:01:54.433 [351/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:54.433 [352/745] Linking static target lib/librte_pcapng.a 00:01:54.697 [353/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:54.697 [354/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:54.697 [355/745] Linking static target lib/librte_lpm.a 00:01:54.697 [356/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:54.697 [357/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:54.697 [358/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:54.697 [359/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:54.959 [360/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:54.959 [361/745] Linking static target lib/librte_reorder.a 00:01:54.959 [362/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.959 [363/745] Generating lib/rte_port_def with a custom command 00:01:54.959 [364/745] Generating lib/rte_port_mingw with a custom command 00:01:54.959 [365/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:54.959 [366/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:54.959 [367/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.959 [368/745] Linking static target lib/acl/libavx2_tmp.a 00:01:54.959 [369/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:54.960 [370/745] Generating lib/rte_pdump_mingw with a custom command 00:01:54.960 [371/745] Generating lib/rte_pdump_def with a custom command 00:01:54.960 [372/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:54.960 [373/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 
00:01:54.960 [374/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:55.226 [375/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:55.226 [376/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:55.226 [377/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:55.226 [378/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:55.226 [379/745] Linking static target lib/librte_security.a 00:01:55.226 [380/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:55.226 [381/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:55.226 [382/745] Linking static target lib/librte_power.a 00:01:55.226 [383/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:55.226 [384/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.226 [385/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:55.226 [386/745] Linking static target lib/librte_hash.a 00:01:55.226 [387/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.226 [388/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:55.490 [389/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.490 [390/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:55.490 [391/745] Linking static target lib/librte_rib.a 00:01:55.490 [392/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:55.751 [393/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:55.751 [394/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:55.751 [395/745] Linking static target lib/acl/libavx512_tmp.a 00:01:55.751 [396/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:55.751 [397/745] Linking static target lib/librte_acl.a 
00:01:55.751 [398/745] Generating lib/rte_table_def with a custom command 00:01:55.751 [399/745] Generating lib/rte_table_mingw with a custom command 00:01:55.751 [400/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.018 [401/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:56.018 [402/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:56.018 [403/745] Linking static target lib/librte_ethdev.a 00:01:56.276 [404/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.276 [405/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.276 [406/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:56.276 [407/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:56.276 [408/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:56.276 [409/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:56.276 [410/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:56.276 [411/745] Generating lib/rte_pipeline_def with a custom command 00:01:56.276 [412/745] Generating lib/rte_pipeline_mingw with a custom command 00:01:56.276 [413/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:56.276 [414/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:56.276 [415/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.276 [416/745] Linking static target lib/librte_mbuf.a 00:01:56.536 [417/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:56.536 [418/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:56.536 [419/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:56.536 [420/745] Generating lib/rte_graph_def with a custom command 
00:01:56.536 [421/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:56.536 [422/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:56.536 [423/745] Generating lib/rte_graph_mingw with a custom command 00:01:56.536 [424/745] Linking static target lib/librte_fib.a 00:01:56.536 [425/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:56.797 [426/745] Linking static target lib/librte_eventdev.a 00:01:56.797 [427/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:56.797 [428/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.797 [429/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:56.797 [430/745] Linking static target lib/librte_member.a 00:01:56.797 [431/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:56.797 [432/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:56.797 [433/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:56.797 [434/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:56.797 [435/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:56.797 [436/745] Generating lib/rte_node_def with a custom command 00:01:56.797 [437/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:56.797 [438/745] Generating lib/rte_node_mingw with a custom command 00:01:57.059 [439/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:57.059 [440/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.059 [441/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:57.059 [442/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:57.059 [443/745] Linking static target lib/librte_sched.a 00:01:57.059 [444/745] Compiling C object 
lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:57.321 [445/745] Generating drivers/rte_bus_pci_def with a custom command 00:01:57.321 [446/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.321 [447/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:57.321 [448/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:57.321 [449/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:57.321 [450/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:57.321 [451/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.321 [452/745] Generating drivers/rte_mempool_ring_def with a custom command 00:01:57.321 [453/745] Generating drivers/rte_bus_vdev_def with a custom command 00:01:57.321 [454/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:57.321 [455/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:57.321 [456/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:01:57.321 [457/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:57.321 [458/745] Linking static target lib/librte_cryptodev.a 00:01:57.585 [459/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:57.585 [460/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:57.585 [461/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:57.585 [462/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:57.585 [463/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:57.585 [464/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:57.585 [465/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:57.585 [466/745] Compiling C object 
lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:57.585 [467/745] Linking static target lib/librte_pdump.a 00:01:57.585 [468/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:57.585 [469/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:57.849 [470/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:57.849 [471/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:57.849 [472/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:57.849 [473/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:57.849 [474/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:57.849 [475/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.849 [476/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:57.849 [477/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:57.849 [478/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:57.849 [479/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:57.849 [480/745] Generating drivers/rte_net_i40e_def with a custom command 00:01:58.115 [481/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:58.115 [482/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:58.115 [483/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:58.115 [484/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.115 [485/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:58.115 [486/745] Linking static target drivers/librte_bus_vdev.a 00:01:58.115 [487/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:58.115 [488/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:58.115 
[489/745] Linking static target lib/librte_ipsec.a 00:01:58.115 [490/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:58.115 [491/745] Linking static target lib/librte_table.a 00:01:58.376 [492/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:58.376 [493/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:58.376 [494/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:58.643 [495/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.643 [496/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:58.643 [497/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:58.643 [498/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:58.643 [499/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.643 [500/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:58.643 [501/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:58.643 [502/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:58.643 [503/745] Linking static target lib/librte_graph.a 00:01:58.904 [504/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:58.904 [505/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:58.904 [506/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:58.904 [507/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:58.904 [508/745] Linking static target drivers/librte_bus_pci.a 00:01:58.904 [509/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:58.904 [510/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:58.904 [511/745] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:59.169 [512/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:59.169 [513/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:59.169 [514/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.435 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:59.435 [516/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.699 [517/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.699 [518/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:59.699 [519/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:59.699 [520/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:59.699 [521/745] Linking static target lib/librte_port.a 00:01:59.699 [522/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:59.960 [523/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:59.960 [524/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:59.960 [525/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:59.960 [526/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:00.235 [527/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:00.235 [528/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.235 [529/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:00.235 [530/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:00.235 [531/745] Compiling C object 
drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:00.235 [532/745] Linking static target drivers/librte_mempool_ring.a 00:02:00.235 [533/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:00.513 [534/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:00.513 [535/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:00.513 [536/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:00.513 [537/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:00.513 [538/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:00.514 [539/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:00.792 [540/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.792 [541/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.073 [542/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:01.073 [543/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:01.073 [544/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:01.341 [545/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:01.341 [546/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:01.341 [547/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:01.341 [548/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:01.341 [549/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:01.599 [550/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 
00:02:01.599 [551/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:01.599 [552/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:01.858 [553/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:01.858 [554/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:01.858 [555/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:02.121 [556/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:02.121 [557/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:02.121 [558/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:02.121 [559/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:02.383 [560/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:02.642 [561/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:02.642 [562/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:02.642 [563/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:02.642 [564/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:02.642 [565/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:02.642 [566/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:02.642 [567/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:02.907 [568/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:02.907 [569/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:02.907 [570/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:02.907 [571/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:02.907 [572/745] 
Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:03.169 [573/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:03.169 [574/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:03.429 [575/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.429 [576/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:03.430 [577/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:03.430 [578/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:03.430 [579/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:03.430 [580/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:03.430 [581/745] Linking target lib/librte_eal.so.23.0 00:02:03.430 [582/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:03.430 [583/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:03.430 [584/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:03.430 [585/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:03.693 [586/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:03.693 [587/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:03.693 [588/745] Linking target lib/librte_ring.so.23.0 00:02:03.956 [589/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.956 [590/745] Linking target lib/librte_meter.so.23.0 00:02:03.956 [591/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:03.956 [592/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:03.956 [593/745] Linking target 
lib/librte_rcu.so.23.0 00:02:04.218 [594/745] Linking target lib/librte_mempool.so.23.0 00:02:04.218 [595/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:04.218 [596/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:04.218 [597/745] Linking target lib/librte_pci.so.23.0 00:02:04.218 [598/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:04.218 [599/745] Linking target lib/librte_timer.so.23.0 00:02:04.218 [600/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:04.218 [601/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:04.218 [602/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:04.481 [603/745] Linking target lib/librte_acl.so.23.0 00:02:04.481 [604/745] Linking target lib/librte_cfgfile.so.23.0 00:02:04.481 [605/745] Linking target lib/librte_jobstats.so.23.0 00:02:04.481 [606/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:04.481 [607/745] Linking target lib/librte_rawdev.so.23.0 00:02:04.481 [608/745] Linking target lib/librte_dmadev.so.23.0 00:02:04.481 [609/745] Linking target lib/librte_stack.so.23.0 00:02:04.481 [610/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:04.481 [611/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:04.481 [612/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:04.481 [613/745] Linking target lib/librte_mbuf.so.23.0 00:02:04.481 [614/745] Linking target lib/librte_rib.so.23.0 00:02:04.481 [615/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:04.481 [616/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:04.481 [617/745] Linking target drivers/librte_bus_vdev.so.23.0 00:02:04.481 
[618/745] Linking target drivers/librte_mempool_ring.so.23.0 00:02:04.481 [619/745] Linking target lib/librte_graph.so.23.0 00:02:04.481 [620/745] Linking target drivers/librte_bus_pci.so.23.0 00:02:04.481 [621/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:04.481 [622/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:04.740 [623/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:04.740 [624/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:04.740 [625/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:04.740 [626/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:04.740 [627/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:04.740 [628/745] Linking target lib/librte_compressdev.so.23.0 00:02:04.740 [629/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:04.740 [630/745] Linking target lib/librte_bbdev.so.23.0 00:02:04.740 [631/745] Linking target lib/librte_gpudev.so.23.0 00:02:04.740 [632/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:04.740 [633/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:04.740 [634/745] Linking target lib/librte_net.so.23.0 00:02:04.740 [635/745] Linking target lib/librte_regexdev.so.23.0 00:02:04.740 [636/745] Linking target lib/librte_distributor.so.23.0 00:02:04.740 [637/745] Linking target lib/librte_cryptodev.so.23.0 00:02:04.740 [638/745] Linking target lib/librte_reorder.so.23.0 00:02:04.740 [639/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:04.740 [640/745] Linking target lib/librte_sched.so.23.0 00:02:04.740 [641/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:04.740 [642/745] Linking 
target lib/librte_fib.so.23.0 00:02:04.998 [643/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:04.998 [644/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:04.998 [645/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:04.998 [646/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:04.998 [647/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:04.998 [648/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:04.998 [649/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:04.998 [650/745] Linking target lib/librte_hash.so.23.0 00:02:04.998 [651/745] Linking target lib/librte_security.so.23.0 00:02:04.998 [652/745] Linking target lib/librte_cmdline.so.23.0 00:02:04.998 [653/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:04.998 [654/745] Linking target lib/librte_ethdev.so.23.0 00:02:05.256 [655/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:05.256 [656/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:05.256 [657/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:05.256 [658/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:05.256 [659/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:05.256 [660/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:05.256 [661/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:05.256 [662/745] Linking target lib/librte_efd.so.23.0 00:02:05.256 [663/745] Linking target lib/librte_lpm.so.23.0 00:02:05.256 [664/745] Linking target lib/librte_member.so.23.0 00:02:05.256 [665/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:05.256 [666/745] Generating symbol file 
lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:05.256 [667/745] Linking target lib/librte_ipsec.so.23.0 00:02:05.256 [668/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:05.256 [669/745] Linking target lib/librte_ip_frag.so.23.0 00:02:05.256 [670/745] Linking target lib/librte_bpf.so.23.0 00:02:05.256 [671/745] Linking target lib/librte_pcapng.so.23.0 00:02:05.514 [672/745] Linking target lib/librte_power.so.23.0 00:02:05.514 [673/745] Linking target lib/librte_metrics.so.23.0 00:02:05.514 [674/745] Linking target lib/librte_gso.so.23.0 00:02:05.514 [675/745] Linking target lib/librte_gro.so.23.0 00:02:05.514 [676/745] Linking target lib/librte_eventdev.so.23.0 00:02:05.514 [677/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:05.514 [678/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:05.514 [679/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:05.514 [680/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:05.514 [681/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:05.514 [682/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:05.514 [683/745] Linking target lib/librte_pdump.so.23.0 00:02:05.514 [684/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:05.514 [685/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:05.514 [686/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:05.514 [687/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:05.514 [688/745] Linking target lib/librte_bitratestats.so.23.0 00:02:05.514 [689/745] Linking target lib/librte_latencystats.so.23.0 00:02:05.514 [690/745] Linking target lib/librte_port.so.23.0 00:02:05.772 
[691/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:05.772 [692/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:05.772 [693/745] Linking target lib/librte_table.so.23.0 00:02:06.031 [694/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:06.031 [695/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:06.289 [696/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:06.547 [697/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:06.547 [698/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:06.547 [699/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:06.805 [700/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:06.805 [701/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:06.805 [702/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:06.805 [703/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:07.371 [704/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:07.371 [705/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:07.371 [706/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:07.371 [707/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:07.371 [708/745] Linking static target drivers/librte_net_i40e.a 00:02:07.371 [709/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:07.628 [710/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:07.886 [711/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:07.886 [712/745] Linking target drivers/librte_net_i40e.so.23.0 00:02:08.450 [713/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:08.450 [714/745] Linking static target lib/librte_node.a 00:02:08.708 [715/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.708 [716/745] Linking target lib/librte_node.so.23.0 00:02:08.965 [717/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:09.897 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:10.154 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:18.256 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:50.324 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:50.324 [722/745] Linking static target lib/librte_vhost.a 00:02:50.324 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.324 [724/745] Linking target lib/librte_vhost.so.23.0 00:03:08.401 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:08.401 [726/745] Linking static target lib/librte_pipeline.a 00:03:08.401 [727/745] Linking target app/dpdk-test-cmdline 00:03:08.401 [728/745] Linking target app/dpdk-pdump 00:03:08.401 [729/745] Linking target app/dpdk-test-acl 00:03:08.401 [730/745] Linking target app/dpdk-dumpcap 00:03:08.401 [731/745] Linking target app/dpdk-test-fib 00:03:08.401 [732/745] Linking target app/dpdk-proc-info 00:03:08.401 [733/745] Linking target app/dpdk-test-security-perf 00:03:08.401 [734/745] Linking target app/dpdk-test-sad 00:03:08.401 [735/745] Linking target app/dpdk-test-gpudev 00:03:08.401 [736/745] Linking target app/dpdk-test-flow-perf 00:03:08.401 [737/745] Linking target app/dpdk-test-pipeline 00:03:08.401 [738/745] Linking target app/dpdk-test-regex 00:03:08.401 [739/745] Linking target app/dpdk-test-bbdev 
00:03:08.401 [740/745] Linking target app/dpdk-test-eventdev 00:03:08.401 [741/745] Linking target app/dpdk-test-crypto-perf 00:03:08.401 [742/745] Linking target app/dpdk-test-compress-perf 00:03:08.401 [743/745] Linking target app/dpdk-testpmd 00:03:09.335 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.593 [745/745] Linking target lib/librte_pipeline.so.23.0 00:03:09.593 00:45:39 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:03:09.593 00:45:39 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:09.593 00:45:39 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:03:09.593 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:09.593 [0/1] Installing files. 00:03:09.855 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 
00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:09.855 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.856 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:09.856 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.856 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:09.857 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:09.857 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:09.857 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:09.857 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 
00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.857 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:09.858 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:09.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:09.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:09.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:09.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:09.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:09.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:09.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:09.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:09.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:09.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:09.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:09.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:09.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:09.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:09.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:09.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:09.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:09.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:09.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:09.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:09.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:09.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:09.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:09.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:09.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:09.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:09.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:09.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:09.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:09.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:09.859 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:10.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:10.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:10.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:10.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:10.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:10.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:10.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:10.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:03:10.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:03:10.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:10.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:10.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:03:10.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:03:10.120 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.120 Installing
lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing 
lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_sched.so.23.0 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.120 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.690 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.690 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.690 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.690 Installing lib/librte_pipeline.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.690 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.690 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.690 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.690 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.690 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.690 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:10.690 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.690 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:10.690 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.690 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:10.690 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.690 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:10.690 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:10.690 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:10.690 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:10.690 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:10.690 Installing app/dpdk-test-bbdev to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:10.690 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:10.690 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:10.690 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:10.690 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:10.690 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:10.690 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:10.690 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:10.690 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:10.690 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:10.690 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:10.690 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:10.690 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:10.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:10.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:10.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:10.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:10.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:10.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:10.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:10.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:10.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:10.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:10.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:10.690 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:10.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.690 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.694 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:10.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:10.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:10.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:10.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:10.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:10.694 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:03:10.694 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:10.694 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:03:10.694 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:10.694 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 
00:03:10.694 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:10.694 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:03:10.694 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:10.694 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:03:10.694 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:10.694 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:03:10.694 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:10.694 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:03:10.694 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:10.694 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:03:10.694 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:10.694 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:03:10.694 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:10.694 Installing symlink pointing to librte_ethdev.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:03:10.694 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:10.694 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:03:10.694 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:10.694 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:03:10.694 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:10.694 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:03:10.694 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:10.694 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:03:10.694 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:10.694 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:03:10.694 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:10.694 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:03:10.694 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:10.694 Installing symlink pointing to 
librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:03:10.694 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:10.694 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:03:10.694 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:10.694 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:03:10.694 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:10.694 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:03:10.694 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:10.694 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:03:10.694 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:10.694 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:03:10.694 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:10.695 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:03:10.695 Installing symlink pointing to librte_distributor.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:10.695 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:03:10.695 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:10.695 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:03:10.695 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:10.695 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:03:10.695 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:10.695 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:03:10.695 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:10.695 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:03:10.695 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:10.695 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:03:10.695 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:10.695 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:03:10.695 Installing 
symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:10.695 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:03:10.695 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:10.695 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:03:10.695 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:10.695 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:03:10.695 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:10.695 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:03:10.695 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:10.695 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:03:10.695 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:10.695 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:03:10.695 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:10.695 Installing symlink pointing to librte_regexdev.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:03:10.695 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:10.695 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:03:10.695 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:10.695 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:03:10.695 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:10.695 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:03:10.695 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:10.695 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:03:10.695 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:10.695 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:03:10.695 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:10.695 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:03:10.695 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:10.695 
Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:03:10.695 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:10.695 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:03:10.695 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:10.695 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:03:10.695 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:10.695 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:03:10.695 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:10.695 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:03:10.695 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:10.695 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:03:10.695 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:10.695 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:03:10.695 Installing symlink pointing to librte_pipeline.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:10.695 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:03:10.695 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:10.695 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:03:10.695 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:10.695 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:10.695 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:10.695 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:10.695 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:10.695 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:10.695 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:10.695 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:10.695 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:10.695 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:10.695 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:10.695 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:10.695 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:10.695 
'./librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:10.695 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:10.695 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:10.695 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:10.695 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:10.695 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:10.695 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:10.695 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:10.695 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:10.695 00:45:41 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat 00:03:10.695 00:45:41 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:10.695 00:03:10.695 real 1m27.947s 00:03:10.695 user 14m28.882s 00:03:10.695 sys 1m47.338s 00:03:10.695 00:45:41 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:10.695 00:45:41 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:10.695 ************************************ 00:03:10.695 END TEST build_native_dpdk 00:03:10.695 ************************************ 00:03:10.695 00:45:41 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:10.695 00:45:41 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:10.695 00:45:41 -- spdk/autobuild.sh@51 -- $ [[ 
0 -eq 1 ]] 00:03:10.695 00:45:41 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:10.695 00:45:41 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:10.695 00:45:41 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:10.695 00:45:41 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:10.695 00:45:41 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:03:10.695 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:03:10.953 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.953 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.953 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:11.210 Using 'verbs' RDMA provider 00:03:21.743 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:29.891 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:29.891 Creating mk/config.mk...done. 00:03:29.891 Creating mk/cc.flags.mk...done. 00:03:29.891 Type 'make' to build. 
00:03:29.891 00:46:00 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:29.891 00:46:00 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:29.891 00:46:00 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:29.891 00:46:00 -- common/autotest_common.sh@10 -- $ set +x 00:03:29.891 ************************************ 00:03:29.891 START TEST make 00:03:29.891 ************************************ 00:03:29.891 00:46:00 make -- common/autotest_common.sh@1125 -- $ make -j48 00:03:30.150 make[1]: Nothing to be done for 'all'. 00:03:32.074 The Meson build system 00:03:32.074 Version: 1.3.1 00:03:32.074 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:32.074 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:32.074 Build type: native build 00:03:32.074 Project name: libvfio-user 00:03:32.074 Project version: 0.0.1 00:03:32.074 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:32.074 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:32.074 Host machine cpu family: x86_64 00:03:32.074 Host machine cpu: x86_64 00:03:32.074 Run-time dependency threads found: YES 00:03:32.074 Library dl found: YES 00:03:32.074 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:32.074 Run-time dependency json-c found: YES 0.17 00:03:32.074 Run-time dependency cmocka found: YES 1.1.7 00:03:32.074 Program pytest-3 found: NO 00:03:32.074 Program flake8 found: NO 00:03:32.074 Program misspell-fixer found: NO 00:03:32.074 Program restructuredtext-lint found: NO 00:03:32.074 Program valgrind found: YES (/usr/bin/valgrind) 00:03:32.074 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:32.074 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:32.074 Compiler for C supports arguments -Wwrite-strings: YES 00:03:32.074 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but 
uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:32.074 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:32.074 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:32.074 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:32.074 Build targets in project: 8 00:03:32.074 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:32.074 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:32.074 00:03:32.074 libvfio-user 0.0.1 00:03:32.074 00:03:32.074 User defined options 00:03:32.074 buildtype : debug 00:03:32.074 default_library: shared 00:03:32.074 libdir : /usr/local/lib 00:03:32.074 00:03:32.074 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:32.337 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:32.603 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:32.603 [2/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:32.603 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:32.603 [4/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:32.603 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:32.603 [6/37] Compiling C object samples/null.p/null.c.o 00:03:32.868 [7/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:32.868 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:32.868 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:32.868 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:32.868 [11/37] Compiling C object 
samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:32.868 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:32.868 [13/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:32.868 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:32.868 [15/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:32.868 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:32.868 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:32.868 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:32.868 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:32.868 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:32.868 [21/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:32.868 [22/37] Compiling C object samples/server.p/server.c.o 00:03:32.868 [23/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:32.868 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:32.868 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:32.868 [26/37] Compiling C object samples/client.p/client.c.o 00:03:32.868 [27/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:33.128 [28/37] Linking target samples/client 00:03:33.128 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:33.128 [30/37] Linking target test/unit_tests 00:03:33.128 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:03:33.387 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:33.387 [33/37] Linking target samples/server 00:03:33.387 [34/37] Linking target samples/null 00:03:33.387 [35/37] Linking target samples/lspci 00:03:33.387 [36/37] Linking target samples/gpio-pci-idio-16 00:03:33.387 [37/37] Linking target samples/shadow_ioeventfd_server 00:03:33.387 INFO: autodetecting backend as ninja 00:03:33.387 INFO: calculating backend command to run: 
/usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:33.387 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:34.336 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:34.336 ninja: no work to do. 00:03:46.537 CC lib/ut/ut.o 00:03:46.537 CC lib/ut_mock/mock.o 00:03:46.537 CC lib/log/log.o 00:03:46.537 CC lib/log/log_flags.o 00:03:46.537 CC lib/log/log_deprecated.o 00:03:46.537 LIB libspdk_ut.a 00:03:46.537 LIB libspdk_log.a 00:03:46.537 LIB libspdk_ut_mock.a 00:03:46.537 SO libspdk_ut.so.2.0 00:03:46.537 SO libspdk_log.so.7.0 00:03:46.537 SO libspdk_ut_mock.so.6.0 00:03:46.537 SYMLINK libspdk_ut.so 00:03:46.537 SYMLINK libspdk_ut_mock.so 00:03:46.537 SYMLINK libspdk_log.so 00:03:46.537 CC lib/dma/dma.o 00:03:46.537 CC lib/ioat/ioat.o 00:03:46.537 CXX lib/trace_parser/trace.o 00:03:46.537 CC lib/util/base64.o 00:03:46.537 CC lib/util/bit_array.o 00:03:46.537 CC lib/util/cpuset.o 00:03:46.537 CC lib/util/crc16.o 00:03:46.537 CC lib/util/crc32.o 00:03:46.537 CC lib/util/crc32c.o 00:03:46.537 CC lib/util/crc32_ieee.o 00:03:46.537 CC lib/util/crc64.o 00:03:46.537 CC lib/util/dif.o 00:03:46.537 CC lib/util/fd.o 00:03:46.537 CC lib/util/fd_group.o 00:03:46.537 CC lib/util/file.o 00:03:46.537 CC lib/util/hexlify.o 00:03:46.537 CC lib/util/iov.o 00:03:46.537 CC lib/util/math.o 00:03:46.537 CC lib/util/net.o 00:03:46.537 CC lib/util/pipe.o 00:03:46.537 CC lib/util/strerror_tls.o 00:03:46.537 CC lib/util/string.o 00:03:46.537 CC lib/util/uuid.o 00:03:46.537 CC lib/util/xor.o 00:03:46.537 CC lib/util/zipf.o 00:03:46.537 CC lib/vfio_user/host/vfio_user_pci.o 00:03:46.537 CC lib/vfio_user/host/vfio_user.o 00:03:46.537 LIB libspdk_dma.a 00:03:46.537 SO libspdk_dma.so.4.0 00:03:46.537 SYMLINK libspdk_dma.so 00:03:46.537 LIB 
libspdk_ioat.a 00:03:46.537 SO libspdk_ioat.so.7.0 00:03:46.537 SYMLINK libspdk_ioat.so 00:03:46.537 LIB libspdk_vfio_user.a 00:03:46.537 SO libspdk_vfio_user.so.5.0 00:03:46.537 SYMLINK libspdk_vfio_user.so 00:03:46.795 LIB libspdk_util.a 00:03:46.795 SO libspdk_util.so.10.0 00:03:47.054 SYMLINK libspdk_util.so 00:03:47.054 LIB libspdk_trace_parser.a 00:03:47.054 SO libspdk_trace_parser.so.5.0 00:03:47.054 CC lib/conf/conf.o 00:03:47.054 CC lib/json/json_parse.o 00:03:47.054 CC lib/idxd/idxd.o 00:03:47.054 CC lib/rdma_utils/rdma_utils.o 00:03:47.054 CC lib/env_dpdk/env.o 00:03:47.054 CC lib/vmd/vmd.o 00:03:47.054 CC lib/rdma_provider/common.o 00:03:47.054 CC lib/json/json_util.o 00:03:47.054 CC lib/idxd/idxd_user.o 00:03:47.054 CC lib/env_dpdk/memory.o 00:03:47.054 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:47.054 CC lib/vmd/led.o 00:03:47.054 CC lib/env_dpdk/pci.o 00:03:47.054 CC lib/idxd/idxd_kernel.o 00:03:47.054 CC lib/json/json_write.o 00:03:47.054 CC lib/env_dpdk/init.o 00:03:47.054 CC lib/env_dpdk/threads.o 00:03:47.054 CC lib/env_dpdk/pci_ioat.o 00:03:47.054 CC lib/env_dpdk/pci_virtio.o 00:03:47.054 CC lib/env_dpdk/pci_vmd.o 00:03:47.054 CC lib/env_dpdk/pci_idxd.o 00:03:47.054 CC lib/env_dpdk/pci_event.o 00:03:47.054 CC lib/env_dpdk/sigbus_handler.o 00:03:47.054 CC lib/env_dpdk/pci_dpdk.o 00:03:47.054 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:47.054 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:47.312 SYMLINK libspdk_trace_parser.so 00:03:47.312 LIB libspdk_rdma_provider.a 00:03:47.312 SO libspdk_rdma_provider.so.6.0 00:03:47.312 SYMLINK libspdk_rdma_provider.so 00:03:47.570 LIB libspdk_conf.a 00:03:47.570 LIB libspdk_json.a 00:03:47.570 LIB libspdk_rdma_utils.a 00:03:47.570 SO libspdk_conf.so.6.0 00:03:47.570 SO libspdk_json.so.6.0 00:03:47.570 SO libspdk_rdma_utils.so.1.0 00:03:47.570 SYMLINK libspdk_conf.so 00:03:47.570 SYMLINK libspdk_rdma_utils.so 00:03:47.570 SYMLINK libspdk_json.so 00:03:47.827 LIB libspdk_idxd.a 00:03:47.827 CC 
lib/jsonrpc/jsonrpc_server.o 00:03:47.827 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:47.827 CC lib/jsonrpc/jsonrpc_client.o 00:03:47.827 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:47.827 SO libspdk_idxd.so.12.0 00:03:47.827 SYMLINK libspdk_idxd.so 00:03:47.827 LIB libspdk_vmd.a 00:03:47.827 SO libspdk_vmd.so.6.0 00:03:47.827 SYMLINK libspdk_vmd.so 00:03:48.086 LIB libspdk_jsonrpc.a 00:03:48.086 SO libspdk_jsonrpc.so.6.0 00:03:48.086 SYMLINK libspdk_jsonrpc.so 00:03:48.344 CC lib/rpc/rpc.o 00:03:48.344 LIB libspdk_rpc.a 00:03:48.602 SO libspdk_rpc.so.6.0 00:03:48.602 SYMLINK libspdk_rpc.so 00:03:48.602 CC lib/trace/trace.o 00:03:48.602 CC lib/notify/notify.o 00:03:48.602 CC lib/trace/trace_flags.o 00:03:48.602 CC lib/notify/notify_rpc.o 00:03:48.602 CC lib/trace/trace_rpc.o 00:03:48.602 CC lib/keyring/keyring.o 00:03:48.602 CC lib/keyring/keyring_rpc.o 00:03:48.860 LIB libspdk_notify.a 00:03:48.860 SO libspdk_notify.so.6.0 00:03:48.860 SYMLINK libspdk_notify.so 00:03:48.860 LIB libspdk_keyring.a 00:03:48.860 LIB libspdk_trace.a 00:03:48.860 SO libspdk_keyring.so.1.0 00:03:49.118 SO libspdk_trace.so.10.0 00:03:49.118 SYMLINK libspdk_keyring.so 00:03:49.118 SYMLINK libspdk_trace.so 00:03:49.118 LIB libspdk_env_dpdk.a 00:03:49.118 SO libspdk_env_dpdk.so.15.0 00:03:49.118 CC lib/thread/thread.o 00:03:49.118 CC lib/thread/iobuf.o 00:03:49.118 CC lib/sock/sock.o 00:03:49.118 CC lib/sock/sock_rpc.o 00:03:49.376 SYMLINK libspdk_env_dpdk.so 00:03:49.634 LIB libspdk_sock.a 00:03:49.634 SO libspdk_sock.so.10.0 00:03:49.634 SYMLINK libspdk_sock.so 00:03:49.891 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:49.891 CC lib/nvme/nvme_ctrlr.o 00:03:49.891 CC lib/nvme/nvme_fabric.o 00:03:49.891 CC lib/nvme/nvme_ns_cmd.o 00:03:49.891 CC lib/nvme/nvme_ns.o 00:03:49.891 CC lib/nvme/nvme_pcie_common.o 00:03:49.891 CC lib/nvme/nvme_pcie.o 00:03:49.891 CC lib/nvme/nvme_qpair.o 00:03:49.891 CC lib/nvme/nvme.o 00:03:49.891 CC lib/nvme/nvme_quirks.o 00:03:49.891 CC lib/nvme/nvme_transport.o 00:03:49.891 CC 
lib/nvme/nvme_discovery.o 00:03:49.891 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:49.891 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:49.891 CC lib/nvme/nvme_tcp.o 00:03:49.891 CC lib/nvme/nvme_opal.o 00:03:49.891 CC lib/nvme/nvme_io_msg.o 00:03:49.891 CC lib/nvme/nvme_poll_group.o 00:03:49.891 CC lib/nvme/nvme_zns.o 00:03:49.891 CC lib/nvme/nvme_stubs.o 00:03:49.891 CC lib/nvme/nvme_auth.o 00:03:49.891 CC lib/nvme/nvme_cuse.o 00:03:49.891 CC lib/nvme/nvme_vfio_user.o 00:03:49.891 CC lib/nvme/nvme_rdma.o 00:03:50.826 LIB libspdk_thread.a 00:03:50.826 SO libspdk_thread.so.10.1 00:03:50.826 SYMLINK libspdk_thread.so 00:03:51.084 CC lib/init/json_config.o 00:03:51.084 CC lib/virtio/virtio.o 00:03:51.084 CC lib/blob/blobstore.o 00:03:51.084 CC lib/init/subsystem.o 00:03:51.084 CC lib/virtio/virtio_vhost_user.o 00:03:51.084 CC lib/init/subsystem_rpc.o 00:03:51.084 CC lib/virtio/virtio_vfio_user.o 00:03:51.084 CC lib/blob/request.o 00:03:51.084 CC lib/init/rpc.o 00:03:51.084 CC lib/blob/zeroes.o 00:03:51.084 CC lib/virtio/virtio_pci.o 00:03:51.084 CC lib/blob/blob_bs_dev.o 00:03:51.084 CC lib/accel/accel.o 00:03:51.084 CC lib/vfu_tgt/tgt_endpoint.o 00:03:51.084 CC lib/accel/accel_rpc.o 00:03:51.084 CC lib/accel/accel_sw.o 00:03:51.084 CC lib/vfu_tgt/tgt_rpc.o 00:03:51.342 LIB libspdk_init.a 00:03:51.342 SO libspdk_init.so.5.0 00:03:51.342 LIB libspdk_virtio.a 00:03:51.342 LIB libspdk_vfu_tgt.a 00:03:51.342 SYMLINK libspdk_init.so 00:03:51.342 SO libspdk_vfu_tgt.so.3.0 00:03:51.342 SO libspdk_virtio.so.7.0 00:03:51.601 SYMLINK libspdk_vfu_tgt.so 00:03:51.601 SYMLINK libspdk_virtio.so 00:03:51.601 CC lib/event/app.o 00:03:51.601 CC lib/event/reactor.o 00:03:51.601 CC lib/event/log_rpc.o 00:03:51.601 CC lib/event/app_rpc.o 00:03:51.601 CC lib/event/scheduler_static.o 00:03:52.166 LIB libspdk_event.a 00:03:52.166 SO libspdk_event.so.14.0 00:03:52.166 SYMLINK libspdk_event.so 00:03:52.166 LIB libspdk_accel.a 00:03:52.166 SO libspdk_accel.so.16.0 00:03:52.166 SYMLINK libspdk_accel.so 
00:03:52.423 CC lib/bdev/bdev.o 00:03:52.423 CC lib/bdev/bdev_rpc.o 00:03:52.423 CC lib/bdev/bdev_zone.o 00:03:52.423 CC lib/bdev/part.o 00:03:52.423 CC lib/bdev/scsi_nvme.o 00:03:52.423 LIB libspdk_nvme.a 00:03:52.681 SO libspdk_nvme.so.13.1 00:03:52.938 SYMLINK libspdk_nvme.so 00:03:53.869 LIB libspdk_blob.a 00:03:54.126 SO libspdk_blob.so.11.0 00:03:54.126 SYMLINK libspdk_blob.so 00:03:54.126 CC lib/lvol/lvol.o 00:03:54.384 CC lib/blobfs/blobfs.o 00:03:54.384 CC lib/blobfs/tree.o 00:03:54.977 LIB libspdk_bdev.a 00:03:54.977 SO libspdk_bdev.so.16.0 00:03:54.977 LIB libspdk_blobfs.a 00:03:54.977 SO libspdk_blobfs.so.10.0 00:03:55.244 SYMLINK libspdk_bdev.so 00:03:55.244 SYMLINK libspdk_blobfs.so 00:03:55.244 LIB libspdk_lvol.a 00:03:55.244 SO libspdk_lvol.so.10.0 00:03:55.244 SYMLINK libspdk_lvol.so 00:03:55.244 CC lib/nbd/nbd.o 00:03:55.244 CC lib/scsi/dev.o 00:03:55.244 CC lib/ublk/ublk.o 00:03:55.244 CC lib/nbd/nbd_rpc.o 00:03:55.244 CC lib/ublk/ublk_rpc.o 00:03:55.244 CC lib/scsi/lun.o 00:03:55.244 CC lib/ftl/ftl_core.o 00:03:55.244 CC lib/nvmf/ctrlr.o 00:03:55.244 CC lib/scsi/port.o 00:03:55.244 CC lib/ftl/ftl_init.o 00:03:55.244 CC lib/nvmf/ctrlr_discovery.o 00:03:55.244 CC lib/scsi/scsi.o 00:03:55.244 CC lib/ftl/ftl_layout.o 00:03:55.244 CC lib/nvmf/ctrlr_bdev.o 00:03:55.244 CC lib/scsi/scsi_bdev.o 00:03:55.244 CC lib/ftl/ftl_debug.o 00:03:55.244 CC lib/nvmf/subsystem.o 00:03:55.244 CC lib/scsi/scsi_pr.o 00:03:55.244 CC lib/ftl/ftl_io.o 00:03:55.244 CC lib/scsi/scsi_rpc.o 00:03:55.244 CC lib/nvmf/nvmf.o 00:03:55.244 CC lib/nvmf/nvmf_rpc.o 00:03:55.244 CC lib/ftl/ftl_sb.o 00:03:55.244 CC lib/scsi/task.o 00:03:55.244 CC lib/nvmf/transport.o 00:03:55.244 CC lib/ftl/ftl_l2p.o 00:03:55.244 CC lib/ftl/ftl_l2p_flat.o 00:03:55.244 CC lib/nvmf/tcp.o 00:03:55.244 CC lib/ftl/ftl_nv_cache.o 00:03:55.244 CC lib/nvmf/stubs.o 00:03:55.244 CC lib/ftl/ftl_band.o 00:03:55.244 CC lib/ftl/ftl_band_ops.o 00:03:55.244 CC lib/nvmf/mdns_server.o 00:03:55.244 CC 
lib/ftl/ftl_writer.o 00:03:55.244 CC lib/nvmf/vfio_user.o 00:03:55.244 CC lib/ftl/ftl_rq.o 00:03:55.244 CC lib/nvmf/rdma.o 00:03:55.244 CC lib/ftl/ftl_reloc.o 00:03:55.244 CC lib/nvmf/auth.o 00:03:55.244 CC lib/ftl/ftl_l2p_cache.o 00:03:55.244 CC lib/ftl/ftl_p2l.o 00:03:55.244 CC lib/ftl/mngt/ftl_mngt.o 00:03:55.244 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:55.244 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:55.244 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:55.244 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:55.244 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:55.244 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:55.818 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:55.818 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:55.818 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:55.818 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:55.818 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:55.818 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:55.818 CC lib/ftl/utils/ftl_conf.o 00:03:55.818 CC lib/ftl/utils/ftl_md.o 00:03:55.818 CC lib/ftl/utils/ftl_mempool.o 00:03:55.818 CC lib/ftl/utils/ftl_bitmap.o 00:03:55.818 CC lib/ftl/utils/ftl_property.o 00:03:55.818 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:55.818 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:55.818 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:55.818 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:55.818 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:55.818 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:55.818 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:55.818 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:55.818 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:55.818 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:55.818 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:56.076 CC lib/ftl/base/ftl_base_dev.o 00:03:56.076 CC lib/ftl/base/ftl_base_bdev.o 00:03:56.076 CC lib/ftl/ftl_trace.o 00:03:56.076 LIB libspdk_nbd.a 00:03:56.076 SO libspdk_nbd.so.7.0 00:03:56.076 SYMLINK libspdk_nbd.so 00:03:56.076 LIB libspdk_scsi.a 00:03:56.334 SO libspdk_scsi.so.9.0 00:03:56.334 SYMLINK libspdk_scsi.so 00:03:56.334 LIB libspdk_ublk.a 00:03:56.334 SO 
libspdk_ublk.so.3.0 00:03:56.593 SYMLINK libspdk_ublk.so 00:03:56.593 CC lib/vhost/vhost.o 00:03:56.593 CC lib/vhost/vhost_rpc.o 00:03:56.593 CC lib/vhost/vhost_scsi.o 00:03:56.593 CC lib/vhost/vhost_blk.o 00:03:56.593 CC lib/vhost/rte_vhost_user.o 00:03:56.593 CC lib/iscsi/conn.o 00:03:56.593 CC lib/iscsi/init_grp.o 00:03:56.593 CC lib/iscsi/iscsi.o 00:03:56.593 CC lib/iscsi/md5.o 00:03:56.593 CC lib/iscsi/param.o 00:03:56.593 CC lib/iscsi/portal_grp.o 00:03:56.593 CC lib/iscsi/tgt_node.o 00:03:56.593 CC lib/iscsi/iscsi_subsystem.o 00:03:56.593 CC lib/iscsi/iscsi_rpc.o 00:03:56.593 CC lib/iscsi/task.o 00:03:56.851 LIB libspdk_ftl.a 00:03:56.851 SO libspdk_ftl.so.9.0 00:03:57.418 SYMLINK libspdk_ftl.so 00:03:57.676 LIB libspdk_vhost.a 00:03:57.676 SO libspdk_vhost.so.8.0 00:03:57.934 LIB libspdk_nvmf.a 00:03:57.934 SYMLINK libspdk_vhost.so 00:03:57.934 SO libspdk_nvmf.so.19.0 00:03:57.934 LIB libspdk_iscsi.a 00:03:57.934 SO libspdk_iscsi.so.8.0 00:03:58.192 SYMLINK libspdk_nvmf.so 00:03:58.192 SYMLINK libspdk_iscsi.so 00:03:58.451 CC module/env_dpdk/env_dpdk_rpc.o 00:03:58.451 CC module/vfu_device/vfu_virtio.o 00:03:58.451 CC module/vfu_device/vfu_virtio_blk.o 00:03:58.451 CC module/vfu_device/vfu_virtio_scsi.o 00:03:58.451 CC module/vfu_device/vfu_virtio_rpc.o 00:03:58.451 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:58.451 CC module/accel/dsa/accel_dsa.o 00:03:58.451 CC module/sock/posix/posix.o 00:03:58.451 CC module/accel/ioat/accel_ioat.o 00:03:58.451 CC module/accel/error/accel_error.o 00:03:58.451 CC module/scheduler/gscheduler/gscheduler.o 00:03:58.451 CC module/accel/dsa/accel_dsa_rpc.o 00:03:58.451 CC module/keyring/linux/keyring.o 00:03:58.451 CC module/accel/iaa/accel_iaa.o 00:03:58.451 CC module/accel/ioat/accel_ioat_rpc.o 00:03:58.451 CC module/accel/error/accel_error_rpc.o 00:03:58.451 CC module/keyring/linux/keyring_rpc.o 00:03:58.451 CC module/accel/iaa/accel_iaa_rpc.o 00:03:58.451 CC module/scheduler/dpdk_governor/dpdk_governor.o 
00:03:58.451 CC module/keyring/file/keyring.o 00:03:58.451 CC module/blob/bdev/blob_bdev.o 00:03:58.451 CC module/keyring/file/keyring_rpc.o 00:03:58.451 LIB libspdk_env_dpdk_rpc.a 00:03:58.709 SO libspdk_env_dpdk_rpc.so.6.0 00:03:58.709 SYMLINK libspdk_env_dpdk_rpc.so 00:03:58.709 LIB libspdk_keyring_linux.a 00:03:58.709 LIB libspdk_scheduler_gscheduler.a 00:03:58.709 LIB libspdk_scheduler_dpdk_governor.a 00:03:58.709 SO libspdk_keyring_linux.so.1.0 00:03:58.709 LIB libspdk_keyring_file.a 00:03:58.709 SO libspdk_scheduler_gscheduler.so.4.0 00:03:58.709 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:58.709 LIB libspdk_accel_error.a 00:03:58.709 LIB libspdk_accel_ioat.a 00:03:58.709 LIB libspdk_scheduler_dynamic.a 00:03:58.709 SO libspdk_keyring_file.so.1.0 00:03:58.709 LIB libspdk_accel_iaa.a 00:03:58.709 SO libspdk_accel_error.so.2.0 00:03:58.709 SO libspdk_accel_ioat.so.6.0 00:03:58.709 SO libspdk_scheduler_dynamic.so.4.0 00:03:58.709 SYMLINK libspdk_keyring_linux.so 00:03:58.709 SYMLINK libspdk_scheduler_gscheduler.so 00:03:58.709 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:58.709 SO libspdk_accel_iaa.so.3.0 00:03:58.709 SYMLINK libspdk_keyring_file.so 00:03:58.709 LIB libspdk_accel_dsa.a 00:03:58.709 LIB libspdk_blob_bdev.a 00:03:58.709 SYMLINK libspdk_accel_error.so 00:03:58.710 SYMLINK libspdk_scheduler_dynamic.so 00:03:58.710 SYMLINK libspdk_accel_ioat.so 00:03:58.710 SO libspdk_blob_bdev.so.11.0 00:03:58.710 SO libspdk_accel_dsa.so.5.0 00:03:58.710 SYMLINK libspdk_accel_iaa.so 00:03:58.968 SYMLINK libspdk_blob_bdev.so 00:03:58.968 SYMLINK libspdk_accel_dsa.so 00:03:58.968 LIB libspdk_vfu_device.a 00:03:58.968 SO libspdk_vfu_device.so.3.0 00:03:59.229 CC module/bdev/null/bdev_null.o 00:03:59.229 CC module/bdev/malloc/bdev_malloc.o 00:03:59.229 CC module/blobfs/bdev/blobfs_bdev.o 00:03:59.229 CC module/bdev/gpt/vbdev_gpt.o 00:03:59.229 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:59.229 CC module/bdev/null/bdev_null_rpc.o 00:03:59.229 CC 
module/bdev/gpt/gpt.o 00:03:59.229 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:59.229 CC module/bdev/aio/bdev_aio.o 00:03:59.229 CC module/bdev/lvol/vbdev_lvol.o 00:03:59.229 CC module/bdev/aio/bdev_aio_rpc.o 00:03:59.229 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:59.229 CC module/bdev/error/vbdev_error.o 00:03:59.229 CC module/bdev/nvme/bdev_nvme.o 00:03:59.229 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:59.229 CC module/bdev/delay/vbdev_delay.o 00:03:59.229 CC module/bdev/nvme/nvme_rpc.o 00:03:59.229 CC module/bdev/split/vbdev_split.o 00:03:59.229 CC module/bdev/raid/bdev_raid.o 00:03:59.229 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:59.229 CC module/bdev/nvme/bdev_mdns_client.o 00:03:59.229 CC module/bdev/error/vbdev_error_rpc.o 00:03:59.229 CC module/bdev/raid/bdev_raid_rpc.o 00:03:59.229 CC module/bdev/passthru/vbdev_passthru.o 00:03:59.229 CC module/bdev/split/vbdev_split_rpc.o 00:03:59.229 CC module/bdev/raid/bdev_raid_sb.o 00:03:59.229 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:59.229 CC module/bdev/nvme/vbdev_opal.o 00:03:59.229 CC module/bdev/iscsi/bdev_iscsi.o 00:03:59.229 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:59.229 CC module/bdev/raid/raid0.o 00:03:59.229 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:59.229 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:59.229 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:59.229 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:59.229 CC module/bdev/raid/raid1.o 00:03:59.229 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:59.229 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:59.229 CC module/bdev/ftl/bdev_ftl.o 00:03:59.229 CC module/bdev/raid/concat.o 00:03:59.229 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:59.229 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:59.229 SYMLINK libspdk_vfu_device.so 00:03:59.487 LIB libspdk_sock_posix.a 00:03:59.487 SO libspdk_sock_posix.so.6.0 00:03:59.487 LIB libspdk_blobfs_bdev.a 00:03:59.487 SO libspdk_blobfs_bdev.so.6.0 00:03:59.487 LIB libspdk_bdev_split.a 
00:03:59.487 SYMLINK libspdk_sock_posix.so 00:03:59.487 LIB libspdk_bdev_gpt.a 00:03:59.487 SYMLINK libspdk_blobfs_bdev.so 00:03:59.487 SO libspdk_bdev_split.so.6.0 00:03:59.487 SO libspdk_bdev_gpt.so.6.0 00:03:59.487 LIB libspdk_bdev_passthru.a 00:03:59.487 LIB libspdk_bdev_error.a 00:03:59.745 SO libspdk_bdev_passthru.so.6.0 00:03:59.745 LIB libspdk_bdev_null.a 00:03:59.745 SO libspdk_bdev_error.so.6.0 00:03:59.745 SYMLINK libspdk_bdev_split.so 00:03:59.745 SYMLINK libspdk_bdev_gpt.so 00:03:59.745 LIB libspdk_bdev_ftl.a 00:03:59.745 SO libspdk_bdev_null.so.6.0 00:03:59.745 LIB libspdk_bdev_iscsi.a 00:03:59.745 SO libspdk_bdev_ftl.so.6.0 00:03:59.745 SYMLINK libspdk_bdev_passthru.so 00:03:59.745 SO libspdk_bdev_iscsi.so.6.0 00:03:59.745 SYMLINK libspdk_bdev_error.so 00:03:59.745 SYMLINK libspdk_bdev_null.so 00:03:59.745 LIB libspdk_bdev_delay.a 00:03:59.745 LIB libspdk_bdev_aio.a 00:03:59.745 SYMLINK libspdk_bdev_ftl.so 00:03:59.745 LIB libspdk_bdev_zone_block.a 00:03:59.745 SYMLINK libspdk_bdev_iscsi.so 00:03:59.745 SO libspdk_bdev_aio.so.6.0 00:03:59.745 SO libspdk_bdev_delay.so.6.0 00:03:59.745 LIB libspdk_bdev_malloc.a 00:03:59.745 SO libspdk_bdev_zone_block.so.6.0 00:03:59.745 SO libspdk_bdev_malloc.so.6.0 00:03:59.745 SYMLINK libspdk_bdev_aio.so 00:03:59.745 SYMLINK libspdk_bdev_delay.so 00:03:59.745 SYMLINK libspdk_bdev_zone_block.so 00:03:59.745 SYMLINK libspdk_bdev_malloc.so 00:04:00.004 LIB libspdk_bdev_lvol.a 00:04:00.004 LIB libspdk_bdev_virtio.a 00:04:00.004 SO libspdk_bdev_lvol.so.6.0 00:04:00.004 SO libspdk_bdev_virtio.so.6.0 00:04:00.004 SYMLINK libspdk_bdev_lvol.so 00:04:00.004 SYMLINK libspdk_bdev_virtio.so 00:04:00.262 LIB libspdk_bdev_raid.a 00:04:00.521 SO libspdk_bdev_raid.so.6.0 00:04:00.521 SYMLINK libspdk_bdev_raid.so 00:04:01.455 LIB libspdk_bdev_nvme.a 00:04:01.455 SO libspdk_bdev_nvme.so.7.0 00:04:01.713 SYMLINK libspdk_bdev_nvme.so 00:04:01.971 CC module/event/subsystems/keyring/keyring.o 00:04:01.971 CC 
module/event/subsystems/vmd/vmd.o 00:04:01.971 CC module/event/subsystems/iobuf/iobuf.o 00:04:01.971 CC module/event/subsystems/sock/sock.o 00:04:01.971 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:01.971 CC module/event/subsystems/scheduler/scheduler.o 00:04:01.971 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:01.971 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:01.971 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:02.230 LIB libspdk_event_keyring.a 00:04:02.230 LIB libspdk_event_vhost_blk.a 00:04:02.230 LIB libspdk_event_vfu_tgt.a 00:04:02.230 LIB libspdk_event_sock.a 00:04:02.230 LIB libspdk_event_scheduler.a 00:04:02.230 LIB libspdk_event_vmd.a 00:04:02.230 LIB libspdk_event_iobuf.a 00:04:02.230 SO libspdk_event_keyring.so.1.0 00:04:02.230 SO libspdk_event_vhost_blk.so.3.0 00:04:02.230 SO libspdk_event_sock.so.5.0 00:04:02.230 SO libspdk_event_scheduler.so.4.0 00:04:02.230 SO libspdk_event_vfu_tgt.so.3.0 00:04:02.230 SO libspdk_event_vmd.so.6.0 00:04:02.230 SO libspdk_event_iobuf.so.3.0 00:04:02.230 SYMLINK libspdk_event_keyring.so 00:04:02.230 SYMLINK libspdk_event_vhost_blk.so 00:04:02.230 SYMLINK libspdk_event_sock.so 00:04:02.230 SYMLINK libspdk_event_vfu_tgt.so 00:04:02.230 SYMLINK libspdk_event_scheduler.so 00:04:02.230 SYMLINK libspdk_event_vmd.so 00:04:02.230 SYMLINK libspdk_event_iobuf.so 00:04:02.487 CC module/event/subsystems/accel/accel.o 00:04:02.487 LIB libspdk_event_accel.a 00:04:02.487 SO libspdk_event_accel.so.6.0 00:04:02.746 SYMLINK libspdk_event_accel.so 00:04:02.746 CC module/event/subsystems/bdev/bdev.o 00:04:03.004 LIB libspdk_event_bdev.a 00:04:03.004 SO libspdk_event_bdev.so.6.0 00:04:03.004 SYMLINK libspdk_event_bdev.so 00:04:03.262 CC module/event/subsystems/scsi/scsi.o 00:04:03.262 CC module/event/subsystems/ublk/ublk.o 00:04:03.262 CC module/event/subsystems/nbd/nbd.o 00:04:03.262 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:03.262 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:03.262 LIB 
libspdk_event_nbd.a 00:04:03.262 LIB libspdk_event_ublk.a 00:04:03.262 LIB libspdk_event_scsi.a 00:04:03.262 SO libspdk_event_ublk.so.3.0 00:04:03.262 SO libspdk_event_nbd.so.6.0 00:04:03.519 SO libspdk_event_scsi.so.6.0 00:04:03.519 SYMLINK libspdk_event_ublk.so 00:04:03.519 SYMLINK libspdk_event_nbd.so 00:04:03.519 SYMLINK libspdk_event_scsi.so 00:04:03.519 LIB libspdk_event_nvmf.a 00:04:03.519 SO libspdk_event_nvmf.so.6.0 00:04:03.519 SYMLINK libspdk_event_nvmf.so 00:04:03.519 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:03.519 CC module/event/subsystems/iscsi/iscsi.o 00:04:03.777 LIB libspdk_event_vhost_scsi.a 00:04:03.777 LIB libspdk_event_iscsi.a 00:04:03.777 SO libspdk_event_vhost_scsi.so.3.0 00:04:03.777 SO libspdk_event_iscsi.so.6.0 00:04:03.777 SYMLINK libspdk_event_vhost_scsi.so 00:04:03.777 SYMLINK libspdk_event_iscsi.so 00:04:04.036 SO libspdk.so.6.0 00:04:04.036 SYMLINK libspdk.so 00:04:04.036 CXX app/trace/trace.o 00:04:04.036 CC app/trace_record/trace_record.o 00:04:04.036 CC app/spdk_lspci/spdk_lspci.o 00:04:04.036 CC app/spdk_top/spdk_top.o 00:04:04.036 TEST_HEADER include/spdk/accel.h 00:04:04.036 CC app/spdk_nvme_perf/perf.o 00:04:04.036 TEST_HEADER include/spdk/assert.h 00:04:04.036 CC test/rpc_client/rpc_client_test.o 00:04:04.036 CC app/spdk_nvme_discover/discovery_aer.o 00:04:04.036 TEST_HEADER include/spdk/accel_module.h 00:04:04.036 CC app/spdk_nvme_identify/identify.o 00:04:04.036 TEST_HEADER include/spdk/base64.h 00:04:04.036 TEST_HEADER include/spdk/barrier.h 00:04:04.036 TEST_HEADER include/spdk/bdev.h 00:04:04.036 TEST_HEADER include/spdk/bdev_module.h 00:04:04.036 TEST_HEADER include/spdk/bdev_zone.h 00:04:04.036 TEST_HEADER include/spdk/bit_array.h 00:04:04.299 TEST_HEADER include/spdk/bit_pool.h 00:04:04.299 TEST_HEADER include/spdk/blob_bdev.h 00:04:04.299 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:04.299 TEST_HEADER include/spdk/blobfs.h 00:04:04.299 TEST_HEADER include/spdk/blob.h 00:04:04.299 TEST_HEADER 
include/spdk/conf.h 00:04:04.299 TEST_HEADER include/spdk/config.h 00:04:04.299 TEST_HEADER include/spdk/cpuset.h 00:04:04.299 TEST_HEADER include/spdk/crc16.h 00:04:04.299 TEST_HEADER include/spdk/crc32.h 00:04:04.299 TEST_HEADER include/spdk/crc64.h 00:04:04.299 TEST_HEADER include/spdk/dif.h 00:04:04.299 TEST_HEADER include/spdk/dma.h 00:04:04.299 TEST_HEADER include/spdk/endian.h 00:04:04.299 TEST_HEADER include/spdk/env_dpdk.h 00:04:04.299 TEST_HEADER include/spdk/env.h 00:04:04.299 TEST_HEADER include/spdk/fd_group.h 00:04:04.299 TEST_HEADER include/spdk/event.h 00:04:04.299 TEST_HEADER include/spdk/fd.h 00:04:04.299 TEST_HEADER include/spdk/file.h 00:04:04.299 TEST_HEADER include/spdk/ftl.h 00:04:04.299 TEST_HEADER include/spdk/gpt_spec.h 00:04:04.299 TEST_HEADER include/spdk/hexlify.h 00:04:04.299 TEST_HEADER include/spdk/idxd.h 00:04:04.299 TEST_HEADER include/spdk/histogram_data.h 00:04:04.299 TEST_HEADER include/spdk/idxd_spec.h 00:04:04.299 TEST_HEADER include/spdk/init.h 00:04:04.299 TEST_HEADER include/spdk/ioat_spec.h 00:04:04.299 TEST_HEADER include/spdk/ioat.h 00:04:04.299 TEST_HEADER include/spdk/iscsi_spec.h 00:04:04.299 TEST_HEADER include/spdk/json.h 00:04:04.299 TEST_HEADER include/spdk/jsonrpc.h 00:04:04.299 TEST_HEADER include/spdk/keyring.h 00:04:04.299 TEST_HEADER include/spdk/keyring_module.h 00:04:04.299 TEST_HEADER include/spdk/likely.h 00:04:04.299 TEST_HEADER include/spdk/log.h 00:04:04.299 TEST_HEADER include/spdk/lvol.h 00:04:04.299 TEST_HEADER include/spdk/mmio.h 00:04:04.299 TEST_HEADER include/spdk/memory.h 00:04:04.299 TEST_HEADER include/spdk/nbd.h 00:04:04.299 TEST_HEADER include/spdk/net.h 00:04:04.299 TEST_HEADER include/spdk/nvme.h 00:04:04.299 TEST_HEADER include/spdk/nvme_intel.h 00:04:04.299 TEST_HEADER include/spdk/notify.h 00:04:04.299 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:04.299 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:04.299 TEST_HEADER include/spdk/nvme_spec.h 00:04:04.299 TEST_HEADER 
include/spdk/nvme_zns.h 00:04:04.299 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:04.299 TEST_HEADER include/spdk/nvmf.h 00:04:04.299 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:04.299 TEST_HEADER include/spdk/nvmf_spec.h 00:04:04.299 TEST_HEADER include/spdk/nvmf_transport.h 00:04:04.299 TEST_HEADER include/spdk/opal.h 00:04:04.299 TEST_HEADER include/spdk/opal_spec.h 00:04:04.299 TEST_HEADER include/spdk/pci_ids.h 00:04:04.299 TEST_HEADER include/spdk/pipe.h 00:04:04.299 TEST_HEADER include/spdk/queue.h 00:04:04.299 TEST_HEADER include/spdk/reduce.h 00:04:04.299 TEST_HEADER include/spdk/rpc.h 00:04:04.299 TEST_HEADER include/spdk/scheduler.h 00:04:04.299 TEST_HEADER include/spdk/scsi.h 00:04:04.299 TEST_HEADER include/spdk/scsi_spec.h 00:04:04.299 TEST_HEADER include/spdk/sock.h 00:04:04.299 TEST_HEADER include/spdk/stdinc.h 00:04:04.299 TEST_HEADER include/spdk/thread.h 00:04:04.299 TEST_HEADER include/spdk/string.h 00:04:04.299 TEST_HEADER include/spdk/trace.h 00:04:04.299 TEST_HEADER include/spdk/trace_parser.h 00:04:04.299 TEST_HEADER include/spdk/tree.h 00:04:04.299 TEST_HEADER include/spdk/ublk.h 00:04:04.299 TEST_HEADER include/spdk/util.h 00:04:04.299 TEST_HEADER include/spdk/uuid.h 00:04:04.299 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:04.299 TEST_HEADER include/spdk/version.h 00:04:04.299 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:04.299 TEST_HEADER include/spdk/vhost.h 00:04:04.299 TEST_HEADER include/spdk/vmd.h 00:04:04.299 TEST_HEADER include/spdk/xor.h 00:04:04.299 TEST_HEADER include/spdk/zipf.h 00:04:04.299 CXX test/cpp_headers/accel.o 00:04:04.299 CXX test/cpp_headers/accel_module.o 00:04:04.299 CXX test/cpp_headers/assert.o 00:04:04.299 CXX test/cpp_headers/barrier.o 00:04:04.299 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:04.299 CXX test/cpp_headers/base64.o 00:04:04.299 CXX test/cpp_headers/bdev.o 00:04:04.299 CXX test/cpp_headers/bdev_module.o 00:04:04.299 CXX test/cpp_headers/bdev_zone.o 00:04:04.299 CXX 
test/cpp_headers/bit_array.o 00:04:04.299 CXX test/cpp_headers/bit_pool.o 00:04:04.299 CC app/spdk_dd/spdk_dd.o 00:04:04.299 CXX test/cpp_headers/blob_bdev.o 00:04:04.299 CXX test/cpp_headers/blobfs_bdev.o 00:04:04.299 CXX test/cpp_headers/blobfs.o 00:04:04.299 CXX test/cpp_headers/blob.o 00:04:04.299 CC app/iscsi_tgt/iscsi_tgt.o 00:04:04.299 CXX test/cpp_headers/conf.o 00:04:04.299 CXX test/cpp_headers/config.o 00:04:04.299 CXX test/cpp_headers/cpuset.o 00:04:04.299 CXX test/cpp_headers/crc16.o 00:04:04.299 CC app/nvmf_tgt/nvmf_main.o 00:04:04.299 CXX test/cpp_headers/crc32.o 00:04:04.299 CC app/spdk_tgt/spdk_tgt.o 00:04:04.299 CC test/env/vtophys/vtophys.o 00:04:04.299 CC test/env/pci/pci_ut.o 00:04:04.299 CC test/thread/poller_perf/poller_perf.o 00:04:04.299 CC examples/ioat/verify/verify.o 00:04:04.299 CC test/env/memory/memory_ut.o 00:04:04.299 CC examples/util/zipf/zipf.o 00:04:04.299 CC test/app/histogram_perf/histogram_perf.o 00:04:04.299 CC examples/ioat/perf/perf.o 00:04:04.299 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:04.299 CC test/app/jsoncat/jsoncat.o 00:04:04.299 CC test/app/stub/stub.o 00:04:04.299 CC app/fio/nvme/fio_plugin.o 00:04:04.299 CC test/dma/test_dma/test_dma.o 00:04:04.299 CC app/fio/bdev/fio_plugin.o 00:04:04.560 CC test/app/bdev_svc/bdev_svc.o 00:04:04.560 LINK spdk_lspci 00:04:04.560 CC test/env/mem_callbacks/mem_callbacks.o 00:04:04.560 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:04.560 LINK rpc_client_test 00:04:04.560 LINK spdk_nvme_discover 00:04:04.560 CXX test/cpp_headers/crc64.o 00:04:04.560 LINK vtophys 00:04:04.560 LINK jsoncat 00:04:04.560 LINK interrupt_tgt 00:04:04.560 LINK poller_perf 00:04:04.560 CXX test/cpp_headers/dif.o 00:04:04.560 CXX test/cpp_headers/dma.o 00:04:04.560 CXX test/cpp_headers/endian.o 00:04:04.560 LINK zipf 00:04:04.560 LINK histogram_perf 00:04:04.560 CXX test/cpp_headers/env_dpdk.o 00:04:04.560 CXX test/cpp_headers/env.o 00:04:04.560 LINK spdk_trace_record 00:04:04.560 CXX 
test/cpp_headers/event.o 00:04:04.560 LINK env_dpdk_post_init 00:04:04.560 LINK nvmf_tgt 00:04:04.560 CXX test/cpp_headers/fd_group.o 00:04:04.829 CXX test/cpp_headers/fd.o 00:04:04.829 CXX test/cpp_headers/file.o 00:04:04.829 CXX test/cpp_headers/ftl.o 00:04:04.829 CXX test/cpp_headers/gpt_spec.o 00:04:04.829 LINK iscsi_tgt 00:04:04.829 LINK stub 00:04:04.829 CXX test/cpp_headers/hexlify.o 00:04:04.829 CXX test/cpp_headers/histogram_data.o 00:04:04.829 CXX test/cpp_headers/idxd.o 00:04:04.829 CXX test/cpp_headers/idxd_spec.o 00:04:04.829 LINK verify 00:04:04.829 LINK ioat_perf 00:04:04.829 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:04.829 LINK spdk_tgt 00:04:04.829 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:04.829 CXX test/cpp_headers/init.o 00:04:04.829 LINK bdev_svc 00:04:04.829 CXX test/cpp_headers/ioat.o 00:04:04.829 LINK mem_callbacks 00:04:04.829 CXX test/cpp_headers/ioat_spec.o 00:04:04.829 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:05.096 CXX test/cpp_headers/iscsi_spec.o 00:04:05.096 CXX test/cpp_headers/json.o 00:04:05.096 LINK spdk_dd 00:04:05.096 CXX test/cpp_headers/jsonrpc.o 00:04:05.096 CXX test/cpp_headers/keyring.o 00:04:05.096 LINK spdk_trace 00:04:05.096 CXX test/cpp_headers/keyring_module.o 00:04:05.096 CXX test/cpp_headers/likely.o 00:04:05.096 CXX test/cpp_headers/log.o 00:04:05.096 CXX test/cpp_headers/lvol.o 00:04:05.096 CXX test/cpp_headers/memory.o 00:04:05.096 LINK pci_ut 00:04:05.096 CXX test/cpp_headers/nbd.o 00:04:05.096 CXX test/cpp_headers/mmio.o 00:04:05.096 CXX test/cpp_headers/net.o 00:04:05.096 CXX test/cpp_headers/notify.o 00:04:05.096 CXX test/cpp_headers/nvme.o 00:04:05.096 CXX test/cpp_headers/nvme_intel.o 00:04:05.096 CXX test/cpp_headers/nvme_ocssd.o 00:04:05.096 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:05.096 CXX test/cpp_headers/nvme_spec.o 00:04:05.096 CXX test/cpp_headers/nvme_zns.o 00:04:05.096 CXX test/cpp_headers/nvmf_cmd.o 00:04:05.096 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:05.096 CXX 
test/cpp_headers/nvmf.o 00:04:05.096 CXX test/cpp_headers/nvmf_spec.o 00:04:05.096 CXX test/cpp_headers/nvmf_transport.o 00:04:05.096 LINK test_dma 00:04:05.096 CXX test/cpp_headers/opal.o 00:04:05.383 CXX test/cpp_headers/opal_spec.o 00:04:05.383 CC test/event/reactor/reactor.o 00:04:05.383 CC test/event/event_perf/event_perf.o 00:04:05.383 CXX test/cpp_headers/pci_ids.o 00:04:05.383 CXX test/cpp_headers/pipe.o 00:04:05.383 LINK nvme_fuzz 00:04:05.383 CXX test/cpp_headers/queue.o 00:04:05.383 CC test/event/reactor_perf/reactor_perf.o 00:04:05.383 CC examples/vmd/lsvmd/lsvmd.o 00:04:05.383 CC examples/sock/hello_world/hello_sock.o 00:04:05.383 CC examples/thread/thread/thread_ex.o 00:04:05.383 LINK spdk_nvme 00:04:05.383 CC examples/vmd/led/led.o 00:04:05.383 CC examples/idxd/perf/perf.o 00:04:05.383 CXX test/cpp_headers/reduce.o 00:04:05.383 LINK spdk_bdev 00:04:05.383 CXX test/cpp_headers/rpc.o 00:04:05.383 CXX test/cpp_headers/scheduler.o 00:04:05.383 CXX test/cpp_headers/scsi.o 00:04:05.383 CC test/event/app_repeat/app_repeat.o 00:04:05.383 CXX test/cpp_headers/scsi_spec.o 00:04:05.383 CXX test/cpp_headers/sock.o 00:04:05.383 CXX test/cpp_headers/stdinc.o 00:04:05.383 CXX test/cpp_headers/string.o 00:04:05.383 CXX test/cpp_headers/thread.o 00:04:05.643 CXX test/cpp_headers/trace.o 00:04:05.643 CC test/event/scheduler/scheduler.o 00:04:05.643 CXX test/cpp_headers/trace_parser.o 00:04:05.643 CXX test/cpp_headers/tree.o 00:04:05.643 CXX test/cpp_headers/ublk.o 00:04:05.643 CXX test/cpp_headers/util.o 00:04:05.643 CXX test/cpp_headers/uuid.o 00:04:05.643 CXX test/cpp_headers/version.o 00:04:05.643 CXX test/cpp_headers/vfio_user_pci.o 00:04:05.643 CXX test/cpp_headers/vfio_user_spec.o 00:04:05.643 CXX test/cpp_headers/vhost.o 00:04:05.643 LINK reactor 00:04:05.643 CXX test/cpp_headers/vmd.o 00:04:05.643 CC app/vhost/vhost.o 00:04:05.643 CXX test/cpp_headers/xor.o 00:04:05.643 CXX test/cpp_headers/zipf.o 00:04:05.643 LINK event_perf 00:04:05.643 LINK spdk_nvme_perf 
00:04:05.643 LINK reactor_perf 00:04:05.643 LINK lsvmd 00:04:05.643 LINK led 00:04:05.643 LINK vhost_fuzz 00:04:05.643 LINK memory_ut 00:04:05.904 LINK app_repeat 00:04:05.904 LINK spdk_top 00:04:05.904 LINK spdk_nvme_identify 00:04:05.904 LINK thread 00:04:05.904 LINK hello_sock 00:04:05.904 CC test/nvme/aer/aer.o 00:04:05.904 CC test/nvme/sgl/sgl.o 00:04:05.904 CC test/nvme/reset/reset.o 00:04:05.904 CC test/nvme/overhead/overhead.o 00:04:05.904 CC test/nvme/e2edp/nvme_dp.o 00:04:05.904 CC test/nvme/err_injection/err_injection.o 00:04:05.904 CC test/nvme/connect_stress/connect_stress.o 00:04:05.904 CC test/nvme/startup/startup.o 00:04:05.904 CC test/nvme/reserve/reserve.o 00:04:05.904 CC test/nvme/simple_copy/simple_copy.o 00:04:05.904 CC test/accel/dif/dif.o 00:04:05.904 CC test/blobfs/mkfs/mkfs.o 00:04:05.904 CC test/nvme/boot_partition/boot_partition.o 00:04:05.904 LINK scheduler 00:04:05.904 CC test/nvme/fused_ordering/fused_ordering.o 00:04:05.904 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:05.904 CC test/nvme/fdp/fdp.o 00:04:05.904 CC test/nvme/compliance/nvme_compliance.o 00:04:05.904 LINK vhost 00:04:05.904 CC test/lvol/esnap/esnap.o 00:04:05.904 LINK idxd_perf 00:04:05.904 CC test/nvme/cuse/cuse.o 00:04:06.162 LINK boot_partition 00:04:06.162 LINK connect_stress 00:04:06.162 LINK startup 00:04:06.162 LINK doorbell_aers 00:04:06.162 LINK mkfs 00:04:06.162 LINK err_injection 00:04:06.162 LINK reset 00:04:06.162 LINK reserve 00:04:06.162 LINK fused_ordering 00:04:06.162 LINK aer 00:04:06.162 LINK sgl 00:04:06.420 LINK simple_copy 00:04:06.420 LINK overhead 00:04:06.420 CC examples/nvme/reconnect/reconnect.o 00:04:06.420 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:06.420 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:06.420 CC examples/nvme/hello_world/hello_world.o 00:04:06.420 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:06.420 CC examples/nvme/hotplug/hotplug.o 00:04:06.420 CC examples/nvme/abort/abort.o 00:04:06.420 CC 
examples/nvme/arbitration/arbitration.o 00:04:06.420 CC examples/accel/perf/accel_perf.o 00:04:06.420 LINK fdp 00:04:06.420 LINK nvme_dp 00:04:06.420 LINK dif 00:04:06.420 CC examples/blob/cli/blobcli.o 00:04:06.420 CC examples/blob/hello_world/hello_blob.o 00:04:06.420 LINK nvme_compliance 00:04:06.420 LINK pmr_persistence 00:04:06.678 LINK cmb_copy 00:04:06.678 LINK hello_world 00:04:06.678 LINK hotplug 00:04:06.678 LINK arbitration 00:04:06.678 LINK reconnect 00:04:06.678 LINK hello_blob 00:04:06.936 LINK abort 00:04:06.936 CC test/bdev/bdevio/bdevio.o 00:04:06.936 LINK nvme_manage 00:04:06.936 LINK accel_perf 00:04:06.936 LINK blobcli 00:04:07.195 LINK iscsi_fuzz 00:04:07.195 CC examples/bdev/hello_world/hello_bdev.o 00:04:07.195 CC examples/bdev/bdevperf/bdevperf.o 00:04:07.195 LINK bdevio 00:04:07.453 LINK cuse 00:04:07.453 LINK hello_bdev 00:04:08.019 LINK bdevperf 00:04:08.582 CC examples/nvmf/nvmf/nvmf.o 00:04:08.839 LINK nvmf 00:04:11.401 LINK esnap 00:04:11.401 00:04:11.401 real 0m41.378s 00:04:11.401 user 7m21.304s 00:04:11.401 sys 1m48.001s 00:04:11.401 00:46:41 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:11.401 00:46:41 make -- common/autotest_common.sh@10 -- $ set +x 00:04:11.401 ************************************ 00:04:11.401 END TEST make 00:04:11.401 ************************************ 00:04:11.401 00:46:41 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:11.401 00:46:41 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:11.401 00:46:41 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:11.401 00:46:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:11.401 00:46:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:11.401 00:46:41 -- pm/common@44 -- $ pid=1590525 00:04:11.401 00:46:41 -- pm/common@50 -- $ kill -TERM 1590525 00:04:11.401 00:46:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 
00:04:11.401 00:46:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:11.401 00:46:41 -- pm/common@44 -- $ pid=1590527 00:04:11.401 00:46:41 -- pm/common@50 -- $ kill -TERM 1590527 00:04:11.401 00:46:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:11.401 00:46:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:11.401 00:46:41 -- pm/common@44 -- $ pid=1590529 00:04:11.401 00:46:41 -- pm/common@50 -- $ kill -TERM 1590529 00:04:11.401 00:46:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:11.401 00:46:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:11.401 00:46:41 -- pm/common@44 -- $ pid=1590557 00:04:11.401 00:46:41 -- pm/common@50 -- $ sudo -E kill -TERM 1590557 00:04:11.401 00:46:41 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:11.401 00:46:41 -- nvmf/common.sh@7 -- # uname -s 00:04:11.401 00:46:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:11.401 00:46:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:11.401 00:46:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:11.401 00:46:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:11.401 00:46:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:11.401 00:46:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:11.401 00:46:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:11.401 00:46:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:11.401 00:46:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:11.401 00:46:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:11.401 00:46:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:11.401 00:46:41 -- 
nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:11.401 00:46:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:11.401 00:46:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:11.401 00:46:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:11.401 00:46:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:11.401 00:46:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:11.401 00:46:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:11.401 00:46:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:11.401 00:46:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:11.401 00:46:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.401 00:46:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.401 00:46:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.401 00:46:41 -- paths/export.sh@5 -- # export PATH 00:04:11.401 00:46:41 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.401 00:46:41 -- nvmf/common.sh@47 -- # : 0 00:04:11.401 00:46:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:11.401 00:46:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:11.401 00:46:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:11.401 00:46:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:11.401 00:46:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:11.401 00:46:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:11.401 00:46:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:11.401 00:46:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:11.401 00:46:41 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:11.401 00:46:41 -- spdk/autotest.sh@32 -- # uname -s 00:04:11.401 00:46:41 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:11.401 00:46:41 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:11.401 00:46:41 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:11.401 00:46:41 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:11.401 00:46:41 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:11.401 00:46:41 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:11.401 00:46:41 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:11.401 00:46:41 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:11.401 00:46:41 -- spdk/autotest.sh@48 -- # udevadm_pid=1666954 00:04:11.401 00:46:41 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:11.401 00:46:41 -- 
spdk/autotest.sh@53 -- # start_monitor_resources 00:04:11.401 00:46:41 -- pm/common@17 -- # local monitor 00:04:11.401 00:46:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:11.401 00:46:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:11.401 00:46:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:11.401 00:46:41 -- pm/common@21 -- # date +%s 00:04:11.401 00:46:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:11.401 00:46:41 -- pm/common@21 -- # date +%s 00:04:11.401 00:46:41 -- pm/common@25 -- # sleep 1 00:04:11.401 00:46:41 -- pm/common@21 -- # date +%s 00:04:11.401 00:46:41 -- pm/common@21 -- # date +%s 00:04:11.401 00:46:41 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721947601 00:04:11.402 00:46:41 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721947601 00:04:11.402 00:46:41 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721947601 00:04:11.402 00:46:41 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721947601 00:04:11.402 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721947601_collect-vmstat.pm.log 00:04:11.402 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721947601_collect-cpu-load.pm.log 00:04:11.402 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721947601_collect-cpu-temp.pm.log 00:04:11.402 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721947601_collect-bmc-pm.bmc.pm.log 00:04:12.779 00:46:42 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:12.779 00:46:42 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:12.779 00:46:42 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:12.779 00:46:42 -- common/autotest_common.sh@10 -- # set +x 00:04:12.779 00:46:42 -- spdk/autotest.sh@59 -- # create_test_list 00:04:12.779 00:46:42 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:12.779 00:46:42 -- common/autotest_common.sh@10 -- # set +x 00:04:12.779 00:46:42 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:12.779 00:46:42 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:12.779 00:46:42 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:12.779 00:46:42 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:12.779 00:46:42 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:12.779 00:46:42 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:12.779 00:46:42 -- common/autotest_common.sh@1455 -- # uname 00:04:12.779 00:46:42 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:12.779 00:46:42 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:12.779 00:46:42 -- common/autotest_common.sh@1475 -- # uname 00:04:12.779 00:46:42 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:12.779 00:46:42 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:12.779 00:46:42 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:12.779 00:46:42 -- spdk/autotest.sh@72 -- # 
hash lcov 00:04:12.779 00:46:42 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:12.779 00:46:42 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:12.779 --rc lcov_branch_coverage=1 00:04:12.779 --rc lcov_function_coverage=1 00:04:12.779 --rc genhtml_branch_coverage=1 00:04:12.779 --rc genhtml_function_coverage=1 00:04:12.779 --rc genhtml_legend=1 00:04:12.779 --rc geninfo_all_blocks=1 00:04:12.779 ' 00:04:12.779 00:46:42 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:12.779 --rc lcov_branch_coverage=1 00:04:12.779 --rc lcov_function_coverage=1 00:04:12.779 --rc genhtml_branch_coverage=1 00:04:12.779 --rc genhtml_function_coverage=1 00:04:12.779 --rc genhtml_legend=1 00:04:12.779 --rc geninfo_all_blocks=1 00:04:12.779 ' 00:04:12.779 00:46:42 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:12.779 --rc lcov_branch_coverage=1 00:04:12.779 --rc lcov_function_coverage=1 00:04:12.779 --rc genhtml_branch_coverage=1 00:04:12.779 --rc genhtml_function_coverage=1 00:04:12.779 --rc genhtml_legend=1 00:04:12.779 --rc geninfo_all_blocks=1 00:04:12.779 --no-external' 00:04:12.779 00:46:42 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:12.779 --rc lcov_branch_coverage=1 00:04:12.779 --rc lcov_function_coverage=1 00:04:12.779 --rc genhtml_branch_coverage=1 00:04:12.779 --rc genhtml_function_coverage=1 00:04:12.779 --rc genhtml_legend=1 00:04:12.779 --rc geninfo_all_blocks=1 00:04:12.779 --no-external' 00:04:12.779 00:46:42 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:12.779 lcov: LCOV version 1.14 00:04:12.779 00:46:42 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:30.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:30.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:43.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:43.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:43.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:43.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:43.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:43.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:43.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:43.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:43.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:43.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:43.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:43.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:43.041 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:43.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:43.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:43.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:43.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:43.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:43.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:43.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:43.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:43.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:43.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:43.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:43.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:43.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:43.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:43.041 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:43.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:43.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:43.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:43.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:43.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:43.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:43.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:43.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:43.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:43.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:43.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:43.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:43.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:43.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:43.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:43.041 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:43.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:43.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:43.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:43.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:43.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:43.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:43.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:43.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:43.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:43.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:43.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:43.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:43.042 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:43.042 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:43.042 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:43.042 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:43.042 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:43.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:43.043 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:43.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:43.043 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:43.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:43.043 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:43.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:43.043 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:43.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:43.043 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:43.043 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:43.043 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:43.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:43.043 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:43.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:43.043 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:43.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:43.043 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:43.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:43.043 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:43.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:43.043 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:46.334 00:47:16 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:46.334 00:47:16 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:46.334 00:47:16 -- common/autotest_common.sh@10 -- # set +x 00:04:46.334 00:47:16 -- spdk/autotest.sh@91 -- # rm -f 00:04:46.334 00:47:16 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:47.271 0000:88:00.0 (8086 0a54): Already using the nvme driver 
00:04:47.271 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:47.271 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:47.271 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:47.271 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:47.271 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:47.271 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:47.271 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:47.271 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:47.271 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:47.271 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:47.271 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:47.271 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:47.271 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:47.271 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:47.271 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:47.271 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:47.531 00:47:17 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:47.531 00:47:17 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:47.531 00:47:17 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:47.531 00:47:17 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:47.531 00:47:17 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:47.531 00:47:17 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:47.531 00:47:17 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:47.531 00:47:17 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:47.531 00:47:17 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:47.531 00:47:17 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:47.531 00:47:17 -- spdk/autotest.sh@110 -- # 
for dev in /dev/nvme*n!(*p*) 00:04:47.531 00:47:17 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:47.531 00:47:17 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:47.531 00:47:17 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:47.531 00:47:17 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:47.531 No valid GPT data, bailing 00:04:47.531 00:47:17 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:47.531 00:47:17 -- scripts/common.sh@391 -- # pt= 00:04:47.531 00:47:17 -- scripts/common.sh@392 -- # return 1 00:04:47.531 00:47:17 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:47.531 1+0 records in 00:04:47.531 1+0 records out 00:04:47.531 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00257193 s, 408 MB/s 00:04:47.531 00:47:17 -- spdk/autotest.sh@118 -- # sync 00:04:47.531 00:47:17 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:47.531 00:47:17 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:47.531 00:47:17 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:49.435 00:47:19 -- spdk/autotest.sh@124 -- # uname -s 00:04:49.435 00:47:19 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:49.435 00:47:19 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:49.435 00:47:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.435 00:47:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.435 00:47:19 -- common/autotest_common.sh@10 -- # set +x 00:04:49.435 ************************************ 00:04:49.435 START TEST setup.sh 00:04:49.435 ************************************ 00:04:49.435 00:47:19 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:49.435 * Looking for test 
storage... 00:04:49.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:49.435 00:47:19 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:49.435 00:47:19 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:49.435 00:47:19 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:49.435 00:47:19 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.435 00:47:19 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.435 00:47:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:49.435 ************************************ 00:04:49.435 START TEST acl 00:04:49.435 ************************************ 00:04:49.435 00:47:19 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:49.435 * Looking for test storage... 00:04:49.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:49.435 00:47:19 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:49.435 00:47:19 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:49.435 00:47:19 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:49.435 00:47:19 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:49.435 00:47:19 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:49.435 00:47:19 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:49.435 00:47:19 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:49.435 00:47:19 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:49.435 00:47:19 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:49.435 00:47:19 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:49.435 00:47:19 
setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:49.435 00:47:19 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:49.435 00:47:19 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:49.435 00:47:19 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:49.436 00:47:19 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:49.436 00:47:19 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:50.813 00:47:20 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:50.813 00:47:20 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:50.813 00:47:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.813 00:47:20 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:50.813 00:47:20 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.813 00:47:20 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:51.758 Hugepages 00:04:51.758 node hugesize free / total 00:04:51.758 00:47:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:51.758 00:47:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:51.758 00:47:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:51.758 00:47:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:51.758 00:47:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:51.758 00:47:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:51.758 00:47:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:51.758 00:47:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:51.758 00:47:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:51.758 00:04:51.758 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:51.758 00:47:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:51.758 
00:47:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:51.758 00:47:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:51.758 00:47:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- 
# continue 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:51.759 00:47:22 
setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:51.759 00:47:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:52.059 00:47:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:52.059 00:47:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:52.059 00:47:22 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:52.059 00:47:22 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:52.059 00:47:22 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:52.059 00:47:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:52.059 00:47:22 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:52.059 00:47:22 setup.sh.acl -- 
setup/acl.sh@54 -- # run_test denied denied 00:04:52.059 00:47:22 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:52.059 00:47:22 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:52.059 00:47:22 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:52.059 ************************************ 00:04:52.059 START TEST denied 00:04:52.059 ************************************ 00:04:52.059 00:47:22 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:04:52.059 00:47:22 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:52.059 00:47:22 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:52.059 00:47:22 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:52.059 00:47:22 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.059 00:47:22 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:53.439 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:04:53.439 00:47:23 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:04:53.439 00:47:23 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:53.439 00:47:23 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:53.440 00:47:23 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:04:53.440 00:47:23 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:04:53.440 00:47:23 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:53.440 00:47:23 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:53.440 00:47:23 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:53.440 00:47:23 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:53.440 00:47:23 
setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:55.971 00:04:55.971 real 0m3.710s 00:04:55.971 user 0m1.159s 00:04:55.971 sys 0m1.654s 00:04:55.971 00:47:25 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.971 00:47:25 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:55.971 ************************************ 00:04:55.971 END TEST denied 00:04:55.971 ************************************ 00:04:55.971 00:47:25 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:55.971 00:47:25 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:55.971 00:47:25 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.971 00:47:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:55.971 ************************************ 00:04:55.971 START TEST allowed 00:04:55.971 ************************************ 00:04:55.971 00:47:25 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:04:55.971 00:47:25 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:55.971 00:47:25 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:55.971 00:47:25 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:55.971 00:47:25 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.971 00:47:25 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:58.504 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:58.504 00:47:28 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:58.504 00:47:28 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:58.504 00:47:28 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:58.504 00:47:28 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 
00:04:58.504 00:47:28 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:59.884 00:04:59.884 real 0m3.884s 00:04:59.884 user 0m1.030s 00:04:59.884 sys 0m1.662s 00:04:59.884 00:47:29 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.884 00:47:29 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:59.884 ************************************ 00:04:59.884 END TEST allowed 00:04:59.884 ************************************ 00:04:59.884 00:04:59.884 real 0m10.309s 00:04:59.884 user 0m3.254s 00:04:59.885 sys 0m5.037s 00:04:59.885 00:47:29 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.885 00:47:29 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:59.885 ************************************ 00:04:59.885 END TEST acl 00:04:59.885 ************************************ 00:04:59.885 00:47:29 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:59.885 00:47:29 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.885 00:47:29 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.885 00:47:29 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:59.885 ************************************ 00:04:59.885 START TEST hugepages 00:04:59.885 ************************************ 00:04:59.885 00:47:29 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:59.885 * Looking for test storage... 
00:04:59.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 41731944 kB' 'MemAvailable: 45220424 kB' 'Buffers: 2704 kB' 'Cached: 12257356 kB' 'SwapCached: 0 kB' 'Active: 9250384 kB' 'Inactive: 3491988 kB' 'Active(anon): 8857600 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485648 kB' 'Mapped: 172536 kB' 'Shmem: 8375288 kB' 'KReclaimable: 195096 kB' 'Slab: 562380 kB' 'SReclaimable: 195096 kB' 'SUnreclaim: 367284 kB' 'KernelStack: 12880 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562308 kB' 'Committed_AS: 9978832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195860 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 14936064 kB' 'DirectMap1G: 52428800 kB' 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.885 00:47:30 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.885 
00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.885 00:47:30 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.885 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.886 
00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce 
== \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue
[00:04:59.886 00:47:30 setup/common.sh@31-32: per-key scan of /proc/meminfo continues; WritebackTmp through HugePages_Surp each fail the Hugepagesize match and hit continue]
00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:04:59.886 00:47:30 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:04:59.886 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:59.886 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:59.886 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@18 -- #
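The scan traced above is `setup/common.sh`'s `get_meminfo` helper: it splits each `/proc/meminfo` line on `': '` with `read -r var val _`, skips non-matching keys with `continue`, and echoes the value column on a match (which is why `Hugepagesize: 2048 kB` comes back as the bare `2048`). A condensed, standalone sketch of the same pattern — the function name is taken from the log, but this stripped-down form reading the file directly is illustrative, not the script's exact implementation:

```shell
#!/usr/bin/env bash
# Echo the value column for one /proc/meminfo key, mirroring common.sh@31-33:
# IFS=': ' splits "Hugepagesize: 2048 kB" into var=Hugepagesize val=2048 _=kB.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every other key falls through here
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1   # key not present
}

get_meminfo Hugepagesize   # on the node in this log, the value was 2048 (kB)
```

The real helper additionally caches the file into an array with `mapfile` and can target a per-NUMA-node meminfo file; the matching logic is the same.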
global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:59.886 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:59.886 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:59.886 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:59.886 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:59.886 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:04:59.886 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:04:59.886 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:59.886 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:59.886 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:59.886 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:59.887 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:59.887 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:59.887 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:04:59.887 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:59.887 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:59.887 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:59.887 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:59.887 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:59.887 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:59.887 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:59.887 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:59.887 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:59.887 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:59.887 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:59.887 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:59.887 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:59.887 00:47:30 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:59.887 00:47:30 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:59.887 00:47:30 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:59.887 00:47:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:59.887 ************************************
00:04:59.887 START TEST default_setup
************************************
00:04:59.887 00:47:30 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup
00:04:59.887 00:47:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:59.887 00:47:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:04:59.887 00:47:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:59.887 00:47:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:04:59.887 00:47:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:59.887 00:47:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:04:59.887 00:47:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:59.887 00:47:30
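The `clear_hp` loop traced here writes `0` into every per-node, per-size `nr_hugepages` file so the test starts from a clean slate (the four `echo 0` records correspond to two hugepage sizes on each of the two NUMA nodes). A sketch of that loop, with a `base` parameter added here purely so it can be dry-run against a fake sysfs tree — the real `hugepages.sh@37-45` hardcodes `/sys/devices/system/node` and needs root:

```shell
#!/usr/bin/env bash
# Zero out nr_hugepages for every hugepage size on every NUMA node,
# mirroring hugepages.sh@37-45 clear_hp. The kernel frees the pages.
clear_hp() {
    local base=${1:-/sys/devices/system/node} node hp
    for node in "$base"/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
}
```

After clearing, the script exports `CLEAR_HUGE=yes` (visible in the trace) so later setup stages know the pools were reset.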
setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:59.887 00:47:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:59.887 00:47:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:59.887 00:47:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:04:59.887 00:47:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:59.887 00:47:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:59.887 00:47:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:59.887 00:47:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:59.887 00:47:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:59.887 00:47:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:59.887 00:47:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:59.887 00:47:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:04:59.887 00:47:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:04:59.887 00:47:30 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:04:59.887 00:47:30 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:01.263 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:05:01.263 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:05:01.263 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:05:01.263 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:05:01.263 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:05:01.263 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:05:01.263 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:05:01.263 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:05:01.263 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:05:01.263 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:05:01.263 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:05:01.263 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:05:01.263 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:05:01.263 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:05:01.263 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:05:01.263 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:05:02.206 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:05:02.206 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:05:02.206 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:05:02.206 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:05:02.206 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:05:02.206 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:05:02.206 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:05:02.206 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:05:02.206 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:02.206 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:02.206 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:02.206 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:02.206 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:02.206 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:02.206 00:47:32 setup.sh.hugepages.default_setup --
setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:02.206 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:02.206 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:02.206 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:02.206 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:02.206 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:02.206 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:02.206 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43852568 kB' 'MemAvailable: 47341160 kB' 'Buffers: 2704 kB' 'Cached: 12257444 kB' 'SwapCached: 0 kB' 'Active: 9264136 kB' 'Inactive: 3491988 kB' 'Active(anon): 8871352 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499312 kB' 'Mapped: 172056 kB' 'Shmem: 8375376 kB' 'KReclaimable: 195320 kB' 'Slab: 562332 kB' 'SReclaimable: 195320 kB' 'SUnreclaim: 367012 kB' 'KernelStack: 12816 kB' 'PageTables: 8176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9993724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 14936064 kB' 'DirectMap1G: 52428800 kB'
[00:05:02.206 setup/common.sh@31-32: per-key scan of the snapshot for AnonHugePages begins; MemTotal, MemFree, MemAvailable, Buffers, Cached each fail the match and hit continue]
00:05:02.206 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:02.206
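The `printf '%s\n' ...` record above is the whole of `/proc/meminfo` as captured into the `mem` array via `mapfile`. The fields the test cares about can be pulled into an associative array and cross-checked; `parse_meminfo` is a hypothetical helper written for illustration (the log's `get_meminfo` scans for one key at a time instead), and the `Hugetlb` identity holds on this single-hugepage-size node: 1024 pages x 2048 kB = 2097152 kB:

```shell
#!/usr/bin/env bash
# Parse "Key: value [kB]" lines (the snapshot format above) from stdin
# into the global associative array mem; the kB suffix lands in _ and drops.
declare -A mem
parse_meminfo() {
    local key val _
    while IFS=': ' read -r key val _; do
        mem[$key]=$val
    done
}

[ -r /proc/meminfo ] && parse_meminfo < /proc/meminfo
echo "HugePages_Total=${mem[HugePages_Total]:-n/a} Hugepagesize=${mem[Hugepagesize]:-n/a} kB"
```

On a machine with only one hugepage size configured, `mem[HugePages_Total] * mem[Hugepagesize]` should equal `mem[Hugetlb]`, which is exactly what the snapshot shows.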
00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[00:05:02.206-00:05:02.207 setup/common.sh@31-32: scan continues; SwapCached through HardwareCorrupted each fail the AnonHugePages match and hit continue]
00:05:02.207 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:02.207 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:02.207 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:02.207 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:05:02.207 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:02.207 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:02.207 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:02.207 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:02.207 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:02.207 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:02.207 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:02.207 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:02.207 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:02.207 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:02.207 00:47:32
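`get_meminfo` takes an optional node argument; the traces show `local node=` left empty, so the per-node existence check tests the literal path `/sys/devices/system/node/node/meminfo` (note the missing node id), fails, and the helper falls back to the system-wide `/proc/meminfo`. A sketch of that source-selection step — `meminfo_source` is a name invented here, and this is an approximation of `common.sh@18-25`, not its exact code:

```shell
#!/usr/bin/env bash
# Pick the meminfo file the way common.sh does: per-NUMA-node when a node id
# was given and its sysfs file exists, otherwise the system-wide /proc file.
meminfo_source() {
    local node=$1
    local mem_f=/proc/meminfo
    local node_f=/sys/devices/system/node/node$node/meminfo
    if [[ -n $node && -e $node_f ]]; then
        mem_f=$node_f
    fi
    echo "$mem_f"
}
```

The per-node file prefixes every line with `Node N `, which is why the trace strips it afterwards with `mem=("${mem[@]#Node +([0-9]) }")` before the same key scan runs.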
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43853580 kB' 'MemAvailable: 47342172 kB' 'Buffers: 2704 kB' 'Cached: 12257444 kB' 'SwapCached: 0 kB' 'Active: 9263876 kB' 'Inactive: 3491988 kB' 'Active(anon): 8871092 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499076 kB' 'Mapped: 172048 kB' 'Shmem: 8375376 kB' 'KReclaimable: 195320 kB' 'Slab: 562332 kB' 'SReclaimable: 195320 kB' 'SUnreclaim: 367012 kB' 'KernelStack: 12736 kB' 'PageTables: 7684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9993744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 14936064 kB' 'DirectMap1G: 52428800 kB'
[00:05:02.208 setup/common.sh@31-32: per-key scan of the snapshot for HugePages_Surp begins; MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached each fail the match and hit continue]
00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val
_ 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.208 00:47:32 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.208 00:47:32 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.208 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.209 
00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.209 00:47:32 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.209 00:47:32 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # 
mapfile -t mem 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.209 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.210 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43854724 kB' 'MemAvailable: 47343316 kB' 'Buffers: 2704 kB' 'Cached: 12257464 kB' 'SwapCached: 0 kB' 'Active: 9263744 kB' 'Inactive: 3491988 kB' 'Active(anon): 8870960 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498916 kB' 'Mapped: 172048 kB' 'Shmem: 8375396 kB' 'KReclaimable: 195320 kB' 'Slab: 562328 kB' 'SReclaimable: 195320 kB' 'SUnreclaim: 367008 kB' 'KernelStack: 12736 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9993764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195952 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 14936064 kB' 'DirectMap1G: 52428800 kB' 00:05:02.210 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.210 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.210 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
-- # IFS=': ' 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.211 
00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:02.211 nr_hugepages=1024 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:02.211 resv_hugepages=0 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:02.211 surplus_hugepages=0 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:02.211 anon_hugepages=0 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:02.211 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:02.212 00:47:32 
setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43855412 kB' 'MemAvailable: 47344004 kB' 'Buffers: 2704 kB' 'Cached: 12257464 kB' 'SwapCached: 0 kB' 'Active: 9263808 kB' 'Inactive: 3491988 kB' 'Active(anon): 8871024 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498988 kB' 'Mapped: 172048 kB' 'Shmem: 8375396 kB' 'KReclaimable: 195320 kB' 'Slab: 562328 kB' 'SReclaimable: 195320 kB' 'SUnreclaim: 367008 kB' 'KernelStack: 12768 kB' 'PageTables: 7960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9993788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195952 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 14936064 kB' 'DirectMap1G: 52428800 kB' 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.212 
00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.212 00:47:32 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.212 00:47:32 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.212 00:47:32 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.212 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 
00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:02.213 00:47:32
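The loop traced above is setup/common.sh's get_meminfo helper: it scans a meminfo file line by line with `IFS=': ' read -r var val _` and echoes the value of the first key that matches. A minimal, self-contained sketch of that pattern follows; the function name `get_meminfo_value` and the fixture file are illustrative, not the script's own, and the fixture stands in for /proc/meminfo so the example is deterministic.

```shell
# Sketch of the get_meminfo parsing pattern seen in the trace: split each
# "Key: value kB" line on ': ' and print the value of the requested key.
get_meminfo_value() {
    local get=$1 mem_f=$2 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # literal match, no globbing
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Small fixture standing in for /proc/meminfo.
fixture=$(mktemp)
printf '%s\n' 'MemTotal: 32876940 kB' 'HugePages_Total: 1024' \
    'HugePages_Free: 1024' 'HugePages_Surp: 0' > "$fixture"

total=$(get_meminfo_value HugePages_Total "$fixture")
echo "$total"   # 1024
rm -f "$fixture"
```

With `IFS=': '`, the line `HugePages_Total: 1024` splits into `var=HugePages_Total` and `val=1024`, which is exactly the `var`/`val` pair visible in the xtrace output.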
setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:02.213 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.214 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.214 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.214 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.214 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19838152 kB' 'MemUsed: 13038788 kB' 'SwapCached: 0 kB' 'Active: 6418332 kB' 'Inactive: 3354812 kB' 'Active(anon): 6146440 kB' 'Inactive(anon): 0 kB' 'Active(file): 271892 kB' 'Inactive(file): 3354812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9649404 kB' 'Mapped: 95984 kB' 'AnonPages: 126936 kB' 'Shmem: 6022700 kB' 'KernelStack: 6616 kB' 'PageTables: 3464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 93260 kB' 'Slab: 318916 kB' 'SReclaimable: 93260 kB' 'SUnreclaim: 225656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:02.214 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.214 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.214 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.214 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.214 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:02.215 00:47:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:02.215 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:02.215 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:02.215 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:02.215 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:02.215 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:02.215 node0=1024 expecting 1024 00:05:02.215 00:47:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 ==
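The backslash runs throughout this log (e.g. `\H\u\g\e\P\a\g\e\s\_\S\u\r\p`) are an artifact of bash's `set -x` tracing, not literal script text: a quoted right-hand side inside `[[ ]]` is printed with every character escaped, to show that the comparison is literal rather than a glob pattern. A tiny demonstration:

```shell
# When xtrace is on, the quoted pattern on the right of == is rendered
# with each character backslash-escaped (to stderr), exactly as in the log.
var=HugePages_Surp
set -x
[[ $var == "HugePages_Surp" ]] && match=yes   # traced as ... == \H\u\g\e...
set +x
echo "$match"   # yes
```

An unquoted right-hand side would instead be treated as a glob, so `[[ $var == HugePages_* ]]` matches any key with that prefix; the harness quotes the pattern precisely to avoid that.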
\1\0\2\4 ]] 00:05:02.215 00:05:02.215 real 0m2.467s 00:05:02.215 user 0m0.650s 00:05:02.215 sys 0m0.944s 00:05:02.215 00:47:32 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.215 00:47:32 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:02.215 ************************************ 00:05:02.215 END TEST default_setup 00:05:02.215 ************************************ 00:05:02.215 00:47:32 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:02.215 00:47:32 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.215 00:47:32 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.215 00:47:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:02.215 ************************************ 00:05:02.215 START TEST per_node_1G_alloc 00:05:02.215 ************************************ 00:05:02.215 00:47:32 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:05:02.215 00:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:02.215 00:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:05:02.215 00:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:02.215 00:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:05:02.215 00:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:02.215 00:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:05:02.215 00:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:02.215 00:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:02.215 00:47:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:02.215 00:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:05:02.215 00:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:05:02.215 00:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:02.215 00:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:02.215 00:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:02.215 00:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:02.215 00:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:02.215 00:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:05:02.215 00:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:02.215 00:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:02.215 00:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:02.215 00:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:02.215 00:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:02.215 00:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:02.215 00:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:05:02.215 00:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:02.215 00:47:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.215 00:47:32 
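The get_test_nr_hugepages trace above converts a 1048576 kB (1 GiB) request into 512 default-size (2048 kB) pages and assigns that count to each node listed in HUGENODE=0,1, which is why nr_hugepages later reads 1024 in total at hugepages.sh@147. A hedged sketch of that arithmetic follows; the variable names mirror the trace but the reconstruction is illustrative, not the script's actual code.

```shell
# Illustrative reconstruction of the per-node hugepage accounting traced
# above: size / default page size gives pages per node; each node listed
# in HUGENODE receives that many pages.
default_hugepages=2048            # kB, the Hugepagesize from /proc/meminfo
size=1048576                      # kB requested per node (1 GiB)
user_nodes=(0 1)                  # HUGENODE=0,1

nr_hugepages=$((size / default_hugepages))
declare -A nodes_test=()
for node in "${user_nodes[@]}"; do
    nodes_test[$node]=$nr_hugepages
done
total=$((nr_hugepages * ${#user_nodes[@]}))

echo "per node: $nr_hugepages, total: $total"   # per node: 512, total: 1024
```

The 512-per-node split only holds for the default 2 MiB hugepage size; with a 1 GiB Hugepagesize the same request would yield a single page per node.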
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:03.592 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:03.592 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:03.592 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:03.592 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:03.592 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:03.592 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:03.592 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:03.592 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:03.592 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:03.592 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:03.592 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:03.592 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:03.592 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:03.592 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:03.592 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:03.592 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:03.592 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:03.592 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:05:03.592 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:03.592 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:03.592 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:03.592 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:03.592 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 
00:05:03.592 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:03.592 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43859636 kB' 'MemAvailable: 47348228 kB' 'Buffers: 2704 kB' 'Cached: 12257552 kB' 'SwapCached: 0 kB' 'Active: 9263544 kB' 'Inactive: 3491988 kB' 'Active(anon): 8870760 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 
kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498480 kB' 'Mapped: 171244 kB' 'Shmem: 8375484 kB' 'KReclaimable: 195320 kB' 'Slab: 562404 kB' 'SReclaimable: 195320 kB' 'SUnreclaim: 367084 kB' 'KernelStack: 12720 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9959560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 14936064 kB' 'DirectMap1G: 52428800 kB' 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.593 00:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.593 00:47:33
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.593 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.594 00:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 
00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43860432 kB' 
'MemAvailable: 47349024 kB' 'Buffers: 2704 kB' 'Cached: 12257556 kB' 'SwapCached: 0 kB' 'Active: 9263164 kB' 'Inactive: 3491988 kB' 'Active(anon): 8870380 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498148 kB' 'Mapped: 171232 kB' 'Shmem: 8375488 kB' 'KReclaimable: 195320 kB' 'Slab: 562404 kB' 'SReclaimable: 195320 kB' 'SUnreclaim: 367084 kB' 'KernelStack: 12720 kB' 'PageTables: 7804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9959580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 14936064 kB' 'DirectMap1G: 52428800 kB' 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 00:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.594 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.595 00:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.595 00:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.595 00:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.595 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.596 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.596 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:03.596 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:03.596 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:05:03.596 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.596 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:03.596 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:03.596 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:03.596 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:03.596 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:03.596 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:03.596 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:03.596 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:03.596 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:03.596 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:03.596 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.596 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.596 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.596 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.596 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.596 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.596 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:03.596 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43860340 kB' 'MemAvailable: 47348932 kB' 'Buffers: 2704 kB' 'Cached: 12257572 kB' 'SwapCached: 0 kB' 'Active: 9263092 kB' 'Inactive: 3491988 kB' 'Active(anon): 8870308 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 497992 kB' 'Mapped: 171156 kB' 'Shmem: 8375504 kB' 'KReclaimable: 195320 kB' 'Slab: 562432 kB' 'SReclaimable: 195320 kB' 'SUnreclaim: 367112 kB' 'KernelStack: 12736 kB' 'PageTables: 7804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9959604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 14936064 kB' 'DirectMap1G: 52428800 kB'
00:05:03.596 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:03.596 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:05:03.596 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.596 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:03.598 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:03.598 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:03.598 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:03.598 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:03.598 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:03.598 nr_hugepages=1024
00:05:03.598 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:03.598 resv_hugepages=0
00:05:03.598 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:03.598 surplus_hugepages=0
00:05:03.598 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:03.598 anon_hugepages=0
00:05:03.598 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:03.598 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:03.598 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:03.598 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:03.598 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:03.598 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:03.598 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:03.598 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.598 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.598 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.598 00:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.598 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.598 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.598 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43860340 kB' 'MemAvailable: 47348932 kB' 'Buffers: 2704 kB' 'Cached: 12257596 kB' 'SwapCached: 0 kB' 'Active: 9263108 kB' 'Inactive: 3491988 kB' 'Active(anon): 8870324 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 497988 kB' 'Mapped: 171156 kB' 'Shmem: 8375528 kB' 'KReclaimable: 195320 kB' 'Slab: 562432 kB' 'SReclaimable: 195320 kB' 'SUnreclaim: 367112 kB' 'KernelStack: 12736 kB' 'PageTables: 7804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9959624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 14936064 kB' 'DirectMap1G: 52428800 kB' 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.599 00:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.599 00:47:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.599 
00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.599 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.600 
00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.600 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.601 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20891228 kB' 'MemUsed: 11985712 kB' 'SwapCached: 0 kB' 'Active: 6418648 kB' 'Inactive: 3354812 kB' 'Active(anon): 6146756 kB' 'Inactive(anon): 0 kB' 'Active(file): 271892 kB' 'Inactive(file): 3354812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9649408 kB' 'Mapped: 95308 kB' 'AnonPages: 127192 kB' 'Shmem: 6022704 kB' 
'KernelStack: 6648 kB' 'PageTables: 3540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 93260 kB' 'Slab: 318916 kB' 'SReclaimable: 93260 kB' 'SUnreclaim: 225656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:03.601 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.601 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.601 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.601 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.601 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.601 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.601 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.601 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.601 00:47:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.601 00:47:34 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.601 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 22969592 kB' 'MemUsed: 
4695180 kB' 'SwapCached: 0 kB' 'Active: 2844296 kB' 'Inactive: 137176 kB' 'Active(anon): 2723404 kB' 'Inactive(anon): 0 kB' 'Active(file): 120892 kB' 'Inactive(file): 137176 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2610936 kB' 'Mapped: 75848 kB' 'AnonPages: 370572 kB' 'Shmem: 2352868 kB' 'KernelStack: 6072 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102060 kB' 'Slab: 243516 kB' 'SReclaimable: 102060 kB' 'SUnreclaim: 141456 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 00:47:34 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:03.862 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:03.863 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:03.863 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:03.863 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:03.863 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:03.863 node0=512 expecting 512 00:05:03.863 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:03.863 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:03.863 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:03.863 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:03.863 node1=512 expecting 512 00:05:03.863 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:03.863 00:05:03.863 real 0m1.455s 
00:05:03.863 user 0m0.633s 00:05:03.863 sys 0m0.780s 00:05:03.863 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.863 00:47:34 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:03.863 ************************************ 00:05:03.863 END TEST per_node_1G_alloc 00:05:03.863 ************************************ 00:05:03.863 00:47:34 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:03.863 00:47:34 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.863 00:47:34 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.863 00:47:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:03.863 ************************************ 00:05:03.863 START TEST even_2G_alloc 00:05:03.863 ************************************ 00:05:03.863 00:47:34 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:05:03.863 00:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:03.863 00:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:03.863 00:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:03.863 00:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:03.863 00:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:03.863 00:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:03.863 00:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:03.863 00:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:03.863 00:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local 
_nr_hugepages=1024 00:05:03.863 00:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:03.863 00:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:03.863 00:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:03.863 00:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:03.863 00:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:03.863 00:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:03.863 00:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:03.863 00:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:05:03.863 00:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:03.863 00:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:03.863 00:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:03.863 00:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:03.863 00:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:03.863 00:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:03.863 00:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:03.863 00:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:03.863 00:47:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:03.863 00:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.863 00:47:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:04.805 
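The even_2G_alloc prologue above requests 2097152 kB of hugepages and splits them evenly across the test box's two NUMA nodes: 2097152 / 2048 kB per page = 1024 pages, 512 per node. A minimal sketch of that arithmetic, with variable names following the trace (the loop body is illustrative, not SPDK's exact `get_test_nr_hugepages_per_node` code, which fills `nodes_test` by decrementing `_no_nodes`):

```shell
# Even per-node hugepage split, mirroring the trace's
# get_test_nr_hugepages / get_test_nr_hugepages_per_node flow.
size_kb=2097152          # requested total in kB (2 GiB), from the trace
default_hugepages=2048   # 2 MiB hugepage size, in kB
_no_nodes=2              # NUMA nodes on the test machine

nr_hugepages=$(( size_kb / default_hugepages ))       # 1024 pages total
declare -a nodes_test
for (( node = 0; node < _no_nodes; node++ )); do
    nodes_test[node]=$(( nr_hugepages / _no_nodes ))  # 512 per node
done

# Matches the "nodeN=512 expecting 512" lines emitted by hugepages.sh@128
for node in "${!nodes_test[@]}"; do
    echo "node${node}=${nodes_test[node]} expecting ${nodes_test[node]}"
done
```

With `HUGE_EVEN_ALLOC=yes` and `NRHUGE=1024` exported as in the trace, `scripts/setup.sh` then performs the actual per-node allocation that the rest of the log verifies.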
0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:04.805 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:04.805 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:04.805 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:04.805 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:04.805 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:04.805 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:04.805 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:04.805 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:04.805 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:04.805 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:04.805 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:04.805 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:04.805 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:04.805 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:04.805 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:04.805 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 
-- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43855456 kB' 'MemAvailable: 47344048 kB' 'Buffers: 2704 kB' 'Cached: 12257692 kB' 'SwapCached: 0 kB' 'Active: 9263736 kB' 'Inactive: 3491988 kB' 'Active(anon): 8870952 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498644 kB' 'Mapped: 171264 kB' 'Shmem: 8375624 kB' 'KReclaimable: 195320 kB' 'Slab: 562816 kB' 'SReclaimable: 195320 kB' 'SUnreclaim: 367496 kB' 'KernelStack: 12736 kB' 
'PageTables: 7852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9959832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 14936064 kB' 'DirectMap1G: 52428800 kB' 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.070 00:47:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.070 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.071 00:47:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.071 
00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.071 00:47:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.071 00:47:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.071 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@19 -- # local var val 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43855884 kB' 'MemAvailable: 47344476 kB' 'Buffers: 2704 kB' 'Cached: 12257696 kB' 'SwapCached: 0 kB' 'Active: 9263572 kB' 'Inactive: 3491988 kB' 'Active(anon): 8870788 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498488 kB' 'Mapped: 171252 kB' 'Shmem: 8375628 kB' 'KReclaimable: 195320 kB' 'Slab: 562816 kB' 'SReclaimable: 195320 kB' 'SUnreclaim: 367496 kB' 'KernelStack: 12768 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9959848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 14936064 kB' 'DirectMap1G: 52428800 kB' 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.072 00:47:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.072 00:47:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.072 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.073 00:47:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.073 
00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.073 00:47:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:05.073 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43855884 kB' 'MemAvailable: 47344476 kB' 'Buffers: 2704 kB' 'Cached: 12257696 kB' 'SwapCached: 0 kB' 'Active: 9263312 kB' 'Inactive: 3491988 kB' 'Active(anon): 8870528 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498232 kB' 'Mapped: 171252 kB' 'Shmem: 8375628 kB' 'KReclaimable: 195320 kB' 'Slab: 562816 kB' 'SReclaimable: 195320 kB' 'SUnreclaim: 367496 kB' 'KernelStack: 12768 kB' 'PageTables: 7836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9959868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 
'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 14936064 kB' 'DirectMap1G: 52428800 kB' 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.074 00:47:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.074 00:47:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.074 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.075 00:47:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.075 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:05.076 nr_hugepages=1024 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:05.076 resv_hugepages=0 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:05.076 
surplus_hugepages=0 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:05.076 anon_hugepages=0 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43856816 kB' 'MemAvailable: 47345408 kB' 'Buffers: 2704 kB' 'Cached: 12257736 kB' 'SwapCached: 0 kB' 'Active: 9263360 kB' 'Inactive: 3491988 kB' 'Active(anon): 8870576 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 
'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498224 kB' 'Mapped: 171172 kB' 'Shmem: 8375668 kB' 'KReclaimable: 195320 kB' 'Slab: 562816 kB' 'SReclaimable: 195320 kB' 'SUnreclaim: 367496 kB' 'KernelStack: 12752 kB' 'PageTables: 7772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9964004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 14936064 kB' 'DirectMap1G: 52428800 kB' 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.076 00:47:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.076 00:47:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.076 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.077 00:47:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.077 00:47:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:05.077 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- 
# for node in /sys/devices/system/node/node+([0-9]) 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 
'MemFree: 20891232 kB' 'MemUsed: 11985708 kB' 'SwapCached: 0 kB' 'Active: 6419108 kB' 'Inactive: 3354812 kB' 'Active(anon): 6147216 kB' 'Inactive(anon): 0 kB' 'Active(file): 271892 kB' 'Inactive(file): 3354812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9649420 kB' 'Mapped: 95308 kB' 'AnonPages: 127704 kB' 'Shmem: 6022716 kB' 'KernelStack: 6664 kB' 'PageTables: 3544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 93260 kB' 'Slab: 319216 kB' 'SReclaimable: 93260 kB' 'SUnreclaim: 225956 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.078 
00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.078 00:47:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.078 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.079 00:47:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.079 00:47:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.079 00:47:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 22965032 kB' 'MemUsed: 4699740 kB' 'SwapCached: 0 kB' 'Active: 2843932 kB' 'Inactive: 137176 kB' 'Active(anon): 2723040 kB' 'Inactive(anon): 0 kB' 'Active(file): 120892 kB' 
'Inactive(file): 137176 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2611040 kB' 'Mapped: 75864 kB' 'AnonPages: 370180 kB' 'Shmem: 2352972 kB' 'KernelStack: 6040 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102060 kB' 'Slab: 243592 kB' 'SReclaimable: 102060 kB' 'SUnreclaim: 141532 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.079 00:47:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.079 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.080 
00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.080 
00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.080 00:47:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.080 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.081 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.081 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.081 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.081 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.081 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.081 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.081 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.081 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.081 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:05.081 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:05.081 00:47:35 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:05.081 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:05.081 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:05.081 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:05.081 node0=512 expecting 512 00:05:05.081 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:05.081 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:05.081 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:05.081 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:05.081 node1=512 expecting 512 00:05:05.081 00:47:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:05.081 00:05:05.081 real 0m1.395s 00:05:05.081 user 0m0.598s 00:05:05.081 sys 0m0.751s 00:05:05.081 00:47:35 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.081 00:47:35 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:05.081 ************************************ 00:05:05.081 END TEST even_2G_alloc 00:05:05.081 ************************************ 00:05:05.340 00:47:35 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:05.340 00:47:35 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.340 00:47:35 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.340 00:47:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:05.340 ************************************ 00:05:05.340 START TEST odd_alloc 00:05:05.340 ************************************ 
00:05:05.340 00:47:35 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:05:05.340 00:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:05.340 00:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:05.340 00:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:05.340 00:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:05.340 00:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:05.340 00:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:05.340 00:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:05.340 00:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:05.340 00:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:05.340 00:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:05.340 00:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:05.340 00:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:05.340 00:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:05.340 00:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:05.340 00:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:05.340 00:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:05.340 00:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:05:05.340 00:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:05.340 00:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 
00:05:05.340 00:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:05:05.340 00:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:05.340 00:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:05.340 00:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:05.340 00:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:05.340 00:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:05.340 00:47:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:05.340 00:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.340 00:47:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:06.278 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:06.278 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:06.278 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:06.278 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:06.278 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:06.278 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:06.278 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:06.278 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:06.278 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:06.278 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:06.278 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:06.278 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:06.278 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:06.278 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:06.278 0000:80:04.2 (8086 0e22): Already using the 
vfio-pci driver 00:05:06.278 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:06.278 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.544 00:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43855228 kB' 'MemAvailable: 47343804 kB' 'Buffers: 2704 kB' 'Cached: 12257820 kB' 'SwapCached: 0 kB' 'Active: 9267140 kB' 'Inactive: 3491988 kB' 'Active(anon): 8874356 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501720 kB' 'Mapped: 171272 kB' 'Shmem: 8375752 kB' 'KReclaimable: 195288 kB' 'Slab: 562560 kB' 'SReclaimable: 195288 kB' 'SUnreclaim: 367272 kB' 'KernelStack: 13120 kB' 'PageTables: 8764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 9953244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196276 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 14936064 kB' 'DirectMap1G: 52428800 kB' 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.544 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.545 00:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.545 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:06.546 00:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43858972 kB' 'MemAvailable: 47347548 kB' 'Buffers: 2704 kB' 'Cached: 12257824 kB' 'SwapCached: 0 kB' 'Active: 9261740 kB' 'Inactive: 3491988 kB' 'Active(anon): 8868956 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496756 kB' 'Mapped: 171252 kB' 'Shmem: 8375756 kB' 'KReclaimable: 195288 kB' 'Slab: 562260 kB' 'SReclaimable: 195288 kB' 'SUnreclaim: 366972 kB' 'KernelStack: 12864 kB' 'PageTables: 8124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 9949156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 
0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 14936064 kB' 'DirectMap1G: 52428800 kB' 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.546 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.547 00:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.547 00:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.547 00:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.547 00:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.547 
00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.547 00:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.547 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43853276 kB' 'MemAvailable: 47341852 kB' 'Buffers: 2704 kB' 'Cached: 12257824 kB' 'SwapCached: 0 kB' 'Active: 9265536 kB' 'Inactive: 3491988 kB' 'Active(anon): 8872752 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500120 kB' 'Mapped: 170772 kB' 'Shmem: 8375756 kB' 'KReclaimable: 195288 kB' 'Slab: 562280 kB' 'SReclaimable: 195288 kB' 'SUnreclaim: 366992 kB' 'KernelStack: 12960 kB' 'PageTables: 9052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 9951964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196224 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 14936064 kB' 'DirectMap1G: 52428800 kB' 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.548 00:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.548 00:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.548 00:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.548 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.549 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 
00:05:06.550 nr_hugepages=1025 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:06.550 resv_hugepages=0 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:06.550 surplus_hugepages=0 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:06.550 anon_hugepages=0 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43852552 kB' 
'MemAvailable: 47341128 kB' 'Buffers: 2704 kB' 'Cached: 12257856 kB' 'SwapCached: 0 kB' 'Active: 9266196 kB' 'Inactive: 3491988 kB' 'Active(anon): 8873412 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500740 kB' 'Mapped: 171156 kB' 'Shmem: 8375788 kB' 'KReclaimable: 195288 kB' 'Slab: 562280 kB' 'SReclaimable: 195288 kB' 'SUnreclaim: 366992 kB' 'KernelStack: 12704 kB' 'PageTables: 8072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 9950940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195972 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 14936064 kB' 'DirectMap1G: 52428800 kB' 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.550 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 
00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 
00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:06.551 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20914864 kB' 'MemUsed: 11962076 kB' 'SwapCached: 0 kB' 'Active: 6415512 kB' 'Inactive: 3354812 kB' 'Active(anon): 6143620 kB' 'Inactive(anon): 0 kB' 'Active(file): 271892 kB' 'Inactive(file): 3354812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9649468 kB' 'Mapped: 95388 kB' 'AnonPages: 123956 kB' 'Shmem: 6022764 kB' 'KernelStack: 6568 kB' 'PageTables: 3036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 93236 kB' 'Slab: 318888 kB' 'SReclaimable: 93236 kB' 'SUnreclaim: 225652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 
'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.553 00:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 22937848 kB' 'MemUsed: 4726924 kB' 'SwapCached: 0 kB' 'Active: 2844340 kB' 'Inactive: 137176 kB' 'Active(anon): 2723448 kB' 'Inactive(anon): 0 kB' 'Active(file): 120892 kB' 'Inactive(file): 137176 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2611092 kB' 'Mapped: 75784 kB' 'AnonPages: 370544 kB' 'Shmem: 2353024 kB' 'KernelStack: 6120 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102052 kB' 'Slab: 243328 kB' 'SReclaimable: 102052 kB' 'SUnreclaim: 141276 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.553 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:47:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 
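The long per-field loop traced above is `setup/common.sh`'s `get_meminfo` helper scanning a node's meminfo until the requested key matches. A minimal standalone sketch of that pattern (sample data stands in for `/sys/devices/system/node/node1/meminfo`; the values mirror the `printf` in the trace, and the function name is borrowed from the script for illustration):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern: split each "Key: value" line on
# ': ' and print the value for the requested key, as the traced loop does
# with its repeated [[ $var == HugePages_Surp ]] checks.
get_meminfo() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done
  return 1
}

# Sample per-node meminfo, values taken from the trace above. The real
# script first strips the "Node <n> " prefix that sysfs adds, using
# extglob: mem=("${mem[@]#Node +([0-9]) }").
shopt -s extglob
sample='Node 1 HugePages_Total: 513
Node 1 HugePages_Free: 513
Node 1 HugePages_Surp: 0'
mapfile -t mem <<<"$sample"
mem=("${mem[@]#Node +([0-9]) }")

surp=$(printf '%s\n' "${mem[@]}" | get_meminfo HugePages_Surp)
echo "HugePages_Surp=$surp"   # prints: HugePages_Surp=0
```

The repeated `continue` lines in the trace are this loop skipping every non-matching field of meminfo before it finally hits `HugePages_Surp`, echoes `0`, and returns.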
00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:05:06.554 node0=512 expecting 513 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:05:06.554 node1=513 expecting 512 00:05:06.554 00:47:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:05:06.554 00:05:06.554 real 0m1.435s 00:05:06.554 user 0m0.576s 00:05:06.554 sys 0m0.820s 00:05:06.555 00:47:36 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.555 00:47:36 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:06.555 ************************************ 00:05:06.555 END TEST odd_alloc 00:05:06.555 ************************************ 00:05:06.814 00:47:36 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:06.814 00:47:36 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.814 00:47:36 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.814 00:47:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:06.814 ************************************ 00:05:06.814 START TEST custom_alloc 00:05:06.814 
************************************ 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 
0 > 0 )) 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@62 -- # local user_nodes 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@62 -- # user_nodes=() 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.814 00:47:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:07.752 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:07.752 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:07.752 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
00:05:07.752 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:07.752 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:07.752 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:07.752 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:07.752 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:07.752 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:07.752 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:07.752 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:07.753 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:07.753 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:07.753 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:07.753 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:07.753 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:07.753 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:08.018 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:05:08.018 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:08.018 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:08.018 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:08.018 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:08.018 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:08.018 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:08.018 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:08.018 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:08.018 00:47:38 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:08.018 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:08.018 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:08.018 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:08.018 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.018 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.018 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.018 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.018 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.018 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.018 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.018 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.018 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42796464 kB' 'MemAvailable: 46285040 kB' 'Buffers: 2704 kB' 'Cached: 12257956 kB' 'SwapCached: 0 kB' 'Active: 9260632 kB' 'Inactive: 3491988 kB' 'Active(anon): 8867848 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495332 kB' 'Mapped: 170456 kB' 'Shmem: 8375888 kB' 'KReclaimable: 195288 kB' 'Slab: 562152 kB' 'SReclaimable: 195288 kB' 'SUnreclaim: 366864 kB' 'KernelStack: 12736 kB' 'PageTables: 7484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 
kB' 'Committed_AS: 9945020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 14936064 kB' 'DirectMap1G: 52428800 kB' 00:05:08.018 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.018 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.018 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.018 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.018 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.018 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.018 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.019 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.019 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.019 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.019 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.019 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.019 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.019 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.019 00:47:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.019 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.019 00:47:38
[repetitive trace elided: the identical IFS=': ' / read -r var val _ / [[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue cycle repeats for every remaining /proc/meminfo field from Cached through VmallocChunk]
00:05:08.020 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.020 00:47:38 setup.sh.hugepages.custom_alloc --
setup/common.sh@31 -- # read -r var val _ 00:05:08.020 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.020 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.020 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.020 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.020 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.020 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.020 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.020 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.020 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.020 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:08.020 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:08.020 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:08.020 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:08.020 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.020 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:08.020 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:08.020 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.020 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.020 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
00:05:08.020 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.020 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.020 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.020 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.020 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.020 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42797336 kB' 'MemAvailable: 46285912 kB' 'Buffers: 2704 kB' 'Cached: 12257956 kB' 'SwapCached: 0 kB' 'Active: 9260472 kB' 'Inactive: 3491988 kB' 'Active(anon): 8867688 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495164 kB' 'Mapped: 170428 kB' 'Shmem: 8375888 kB' 'KReclaimable: 195288 kB' 'Slab: 562152 kB' 'SReclaimable: 195288 kB' 'SUnreclaim: 366864 kB' 'KernelStack: 12768 kB' 'PageTables: 7520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 9945040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 14936064 kB' 'DirectMap1G: 52428800 kB' 00:05:08.020 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.020 
00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.020 00:47:38
[repetitive trace elided: the identical IFS=': ' / read -r var val _ / [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue cycle repeats for every /proc/meminfo field from MemFree through HugePages_Total]
00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.022 00:47:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42796896 kB' 'MemAvailable: 46285472 kB' 'Buffers: 2704 kB' 'Cached: 12257976 kB' 'SwapCached: 0 kB' 'Active: 9259860 kB' 'Inactive: 3491988 kB' 'Active(anon): 8867076 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494456 kB' 'Mapped: 170348 kB' 'Shmem: 8375908 kB' 'KReclaimable: 195288 kB' 'Slab: 562120 kB' 'SReclaimable: 195288 kB' 'SUnreclaim: 366832 kB' 'KernelStack: 12736 kB' 'PageTables: 7412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 9945060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 14936064 kB' 'DirectMap1G: 52428800 kB' 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.022 00:47:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.022 00:47:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.022 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.023 00:47:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.023 00:47:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.023 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:05:08.024 nr_hugepages=1536 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:08.024 resv_hugepages=0 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:08.024 surplus_hugepages=0 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:08.024 anon_hugepages=0 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 
-- # local var val 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.024 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42796896 kB' 'MemAvailable: 46285472 kB' 'Buffers: 2704 kB' 'Cached: 12258000 kB' 'SwapCached: 0 kB' 'Active: 9259852 kB' 'Inactive: 3491988 kB' 'Active(anon): 8867068 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494424 kB' 'Mapped: 170348 kB' 'Shmem: 8375932 kB' 'KReclaimable: 195288 kB' 'Slab: 562120 kB' 'SReclaimable: 195288 kB' 'SUnreclaim: 366832 kB' 'KernelStack: 12720 kB' 'PageTables: 7368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 9945080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 14936064 kB' 'DirectMap1G: 52428800 kB' 00:05:08.024 [xtrace condensed: setup/common.sh@31-32 walks the printf output above key by key (MemTotal, MemFree, MemAvailable, ..., Unaccepted) via `read -r var val _` with IFS=': ', hitting `continue` on every non-matching key; the repeated iterations are elided here, and the scan stops at HugePages_Total] 00:47:38 setup.sh.hugepages.custom_alloc --
setup/common.sh@31 -- # read -r var val _ 00:05:08.026 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.026 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:05:08.026 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:08.026 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:08.026 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:08.026 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:08.026 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:08.026 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:08.026 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:08.026 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:08.026 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:08.026 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:08.026 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:08.026 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:08.026 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:08.026 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.026 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:08.026 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # 
local var val 00:05:08.026 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.026 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.026 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:08.026 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:08.026 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.026 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.026 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.026 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20915940 kB' 'MemUsed: 11961000 kB' 'SwapCached: 0 kB' 'Active: 6415416 kB' 'Inactive: 3354812 kB' 'Active(anon): 6143524 kB' 'Inactive(anon): 0 kB' 'Active(file): 271892 kB' 'Inactive(file): 3354812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9649620 kB' 'Mapped: 94704 kB' 'AnonPages: 123716 kB' 'Shmem: 6022916 kB' 'KernelStack: 6584 kB' 'PageTables: 3116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 93236 kB' 'Slab: 318900 kB' 'SReclaimable: 93236 kB' 'SUnreclaim: 225664 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:08.026 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.026 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.026 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.026 
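[editor's note] The trace above is SPDK's setup/common.sh `get_meminfo` helper reading /sys/devices/system/node/node0/meminfo and scanning for HugePages_Surp. A minimal standalone sketch of the same parsing technique follows; the function below is a hypothetical reimplementation for illustration, not the actual SPDK code:

```shell
#!/usr/bin/env bash
# Hypothetical standalone sketch of the get_meminfo parsing loop traced
# in this log (the real helper lives in SPDK's setup/common.sh).
get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo
    # Per-NUMA-node statistics live under /sys when a node is requested.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Node files prefix every line with "Node N "; strip that, then split
    # each line on ": " and keep scanning until the requested key matches,
    # exactly as the repeated [[ KEY == ... ]] / continue xtrace shows.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
    return 1
}

get_meminfo HugePages_Total   # e.g. prints 1536 on the CI host traced above
```

On a node file the stripped prefix makes the same loop work unchanged, which is why the trace shows an identical scan for /proc/meminfo and for node0/node1.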
00:47:38 setup.sh.hugepages.custom_alloc -- [xtrace condensed: setup/common.sh@31-32 walks the node0 meminfo output key by key (MemTotal, MemFree, MemUsed, ..., Unaccepted, HugePages_Total) via `read -r var val _` and `continue`; the repeated non-matching iterations are elided here, and the scan proceeds until HugePages_Surp matches] 00:05:08.027 00:47:38
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.027 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.027 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.027 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.027 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.027 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.027 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.027 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:08.027 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:08.028 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:08.028 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:08.028 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:08.028 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:08.028 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.028 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:05:08.028 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:08.028 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.028 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.028 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:08.028 00:47:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:08.028 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:08.028 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:08.028 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:08.028 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:08.028 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 21881296 kB' 'MemUsed: 5783476 kB' 'SwapCached: 0 kB' 'Active: 2844500 kB' 'Inactive: 137176 kB' 'Active(anon): 2723608 kB' 'Inactive(anon): 0 kB' 'Active(file): 120892 kB' 'Inactive(file): 137176 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2611100 kB' 'Mapped: 75644 kB' 'AnonPages: 370760 kB' 'Shmem: 2353032 kB' 'KernelStack: 6152 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102052 kB' 'Slab: 243220 kB' 'SReclaimable: 102052 kB' 'SUnreclaim: 141168 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:08.028 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [repeated per-field scan elided: every node1 meminfo field from MemTotal through HugePages_Free tested against HugePages_Surp, no match, continue]
00:05:08.314 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.314 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:08.314 00:47:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:08.314 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:08.314 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:08.314 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:08.314 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:08.314 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:05:08.314 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:08.314 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:08.314 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:08.314 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
node1=1024 expecting 1024
00:05:08.314 00:47:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:05:08.314
00:05:08.314 real 0m1.451s
00:05:08.314 user 0m0.620s
00:05:08.314 sys 0m0.795s
00:05:08.314 00:47:38 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:08.314 00:47:38 setup.sh.hugepages.custom_alloc --
common/autotest_common.sh@10 -- # set +x 00:05:08.314 ************************************ 00:05:08.314 END TEST custom_alloc 00:05:08.314 ************************************ 00:05:08.314 00:47:38 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:08.314 00:47:38 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.314 00:47:38 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.314 00:47:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:08.314 ************************************ 00:05:08.314 START TEST no_shrink_alloc 00:05:08.314 ************************************ 00:05:08.314 00:47:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:05:08.314 00:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:08.314 00:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:08.314 00:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:08.314 00:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:08.314 00:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:08.314 00:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:08.314 00:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:08.314 00:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:08.314 00:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:08.314 00:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:08.314 00:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 
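The xtrace above repeatedly exercises one helper: setup/common.sh's get_meminfo reads a meminfo-style file line by line with IFS=': ', hitting `continue` on every field until the requested key matches, then echoes its value (falling back to 0). A minimal standalone sketch of that pattern follows; this is a simplification for illustration, not the exact SPDK helper, and the name `get_meminfo_sketch`, the file-as-argument interface, and the demo file contents are assumptions.

```shell
#!/usr/bin/env bash
# Sketch (assumption: simplified re-implementation, not setup/common.sh itself):
# scan a meminfo-style file for one field, the same read/continue pattern the
# trace above repeats once per field.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 mem_f=$2 line var val _
    while read -r line; do
        # per-node files (/sys/devices/system/node/nodeN/meminfo) prefix
        # each line with "Node N "; strip that prefix, as common.sh@29 does
        line=${line#Node +([0-9]) }
        IFS=': ' read -r var val _ <<< "$line"
        # non-matching fields are skipped -- this is the repeated
        # "[[ field == ... ]] / continue" seen in the xtrace
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done < "$mem_f"
    echo 0   # field absent: fall back to 0, mirroring common.sh@33's "echo 0"
}

# Demo against a hypothetical per-node meminfo file
tmp=$(mktemp)
printf 'Node 1 MemTotal:       27664772 kB\nNode 1 HugePages_Surp:     0\n' > "$tmp"
get_meminfo_sketch HugePages_Surp "$tmp"
get_meminfo_sketch MemTotal "$tmp"
rm -f "$tmp"
```

Under this pattern, a node with no surplus hugepages yields 0, which is why hugepages.sh@117 adds 0 to nodes_test[node] in the trace.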
00:05:08.314 00:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:08.314 00:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:08.314 00:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:08.314 00:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:08.314 00:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:08.314 00:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:08.314 00:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:08.314 00:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:08.314 00:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:08.314 00:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.314 00:47:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:09.253 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:09.253 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:09.253 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:09.253 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:09.253 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:09.253 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:09.253 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:09.253 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:09.253 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:09.253 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:09.253 0000:80:04.6 (8086 0e26): Already using 
the vfio-pci driver 00:05:09.253 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:09.253 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:09.253 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:09.253 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:09.253 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:09.253 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:09.518 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:09.518 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:09.518 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:09.518 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:09.518 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:09.518 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:09.518 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:09.518 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:09.518 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:09.518 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:09.518 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:09.518 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:09.518 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.518 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.518 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- 
# [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.518 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.518 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.518 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.518 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.518 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.519 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43707772 kB' 'MemAvailable: 47196348 kB' 'Buffers: 2704 kB' 'Cached: 12258088 kB' 'SwapCached: 0 kB' 'Active: 9261116 kB' 'Inactive: 3491988 kB' 'Active(anon): 8868332 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495588 kB' 'Mapped: 170460 kB' 'Shmem: 8376020 kB' 'KReclaimable: 195288 kB' 'Slab: 562148 kB' 'SReclaimable: 195288 kB' 'SUnreclaim: 366860 kB' 'KernelStack: 12784 kB' 'PageTables: 7596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9945492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195952 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 14936064 kB' 'DirectMap1G: 52428800 kB' 00:05:09.519 00:47:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31-32 -- # [repeated per-field scan elided: each /proc/meminfo field from MemTotal onward tested against AnonHugePages, no match, continue]
00:05:09.519 00:47:39 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@31 -- # read -r var val _ 00:05:09.519 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.519 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.519 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.519 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.519 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.519 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.519 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.519 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.519 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.519 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.519 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.519 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.519 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.519 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.519 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.519 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.519 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.519 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.519 00:47:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.520 00:47:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.520 00:47:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.520 
00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43708256 kB' 'MemAvailable: 47196832 kB' 'Buffers: 2704 kB' 'Cached: 12258092 kB' 'SwapCached: 0 kB' 'Active: 9260756 kB' 'Inactive: 3491988 kB' 'Active(anon): 8867972 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495244 kB' 'Mapped: 170444 kB' 'Shmem: 8376024 kB' 'KReclaimable: 195288 kB' 'Slab: 562116 kB' 'SReclaimable: 195288 kB' 'SUnreclaim: 366828 kB' 'KernelStack: 12768 kB' 'PageTables: 7528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9945508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195920 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 14936064 kB' 'DirectMap1G: 52428800 kB' 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.520 00:47:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.520 
00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.520 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.521 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.521
[... identical "IFS=': ' / read -r var val _ / [[ <field> == HugePages_Surp ]] / continue" trace elided for the remaining /proc/meminfo fields ...]
00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.522 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.522 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:09.522 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:09.522 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:09.522
00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:09.522 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:09.522 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:09.522 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.522 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.522 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.522 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.522 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.522 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.522 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.522 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.522 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43708496 kB' 'MemAvailable: 47197072 kB' 'Buffers: 2704 kB' 'Cached: 12258108 kB' 'SwapCached: 0 kB' 'Active: 9260456 kB' 'Inactive: 3491988 kB' 'Active(anon): 8867672 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494872 kB' 'Mapped: 170364 kB' 'Shmem: 8376040 kB' 'KReclaimable: 195288 kB' 'Slab: 562076 kB' 'SReclaimable: 195288 kB' 'SUnreclaim: 366788 kB' 'KernelStack: 12752 kB' 'PageTables: 7460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9945532 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195904 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 14936064 kB' 'DirectMap1G: 52428800 kB' 00:05:09.522 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.522 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.522
[... identical "IFS=': ' / read -r var val _ / [[ <field> == HugePages_Rsvd ]] / continue" trace elided until the requested field is reached ...]
00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.524 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.524 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:09.524 00:47:39 setup.sh.hugepages.no_shrink_alloc --
setup/hugepages.sh@100 -- # resv=0 00:05:09.524 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:09.524 nr_hugepages=1024 00:05:09.524 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:09.524 resv_hugepages=0 00:05:09.524 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:09.524 surplus_hugepages=0 00:05:09.524 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:09.524 anon_hugepages=0 00:05:09.524 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:09.524 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:09.524 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:09.524 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:09.524 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:09.524 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:09.524 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.524 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.524 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.524 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.524 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.524 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.524 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:09.524 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.524 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43708496 kB' 'MemAvailable: 47197072 kB' 'Buffers: 2704 kB' 'Cached: 12258128 kB' 'SwapCached: 0 kB' 'Active: 9260480 kB' 'Inactive: 3491988 kB' 'Active(anon): 8867696 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494876 kB' 'Mapped: 170364 kB' 'Shmem: 8376060 kB' 'KReclaimable: 195288 kB' 'Slab: 562076 kB' 'SReclaimable: 195288 kB' 'SUnreclaim: 366788 kB' 'KernelStack: 12752 kB' 'PageTables: 7460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9945556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195888 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 14936064 kB' 'DirectMap1G: 52428800 kB' 00:05:09.524 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.524 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.524
[... identical "IFS=': ' / read -r var val _ / [[ <field> == HugePages_Total ]] / continue" trace elided for the intervening /proc/meminfo fields ...]
00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.525 00:47:39
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.525 00:47:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.525 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.526 00:47:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.526 
00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19868536 kB' 'MemUsed: 13008404 kB' 'SwapCached: 0 kB' 'Active: 6415424 kB' 'Inactive: 3354812 kB' 'Active(anon): 6143532 kB' 'Inactive(anon): 0 kB' 'Active(file): 271892 kB' 'Inactive(file): 3354812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9649696 kB' 'Mapped: 94704 kB' 'AnonPages: 123676 kB' 'Shmem: 6022992 kB' 'KernelStack: 6600 kB' 'PageTables: 3160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 93236 kB' 'Slab: 318808 kB' 'SReclaimable: 93236 kB' 'SUnreclaim: 225572 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.526 
00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.526 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[identical trace iterations for the remaining node0 meminfo keys (MemUsed through Unaccepted) trimmed: each repeats IFS=': ', read -r var val _, the comparison against HugePages_Surp, and continue]
00:05:09.527 00:47:39 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@31 -- # read -r var val _ 00:05:09.527 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.527 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.527 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.527 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.527 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.527 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.527 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.527 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.527 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.527 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.527 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:09.527 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:09.527 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:09.528 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:09.528 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:09.528 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:09.528 node0=1024 expecting 1024 00:05:09.528 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:09.528 00:47:39 
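The repeated `IFS=': '` / `read -r var val _` / `continue` entries above are setup/common.sh's get_meminfo helper scanning /proc/meminfo one field at a time until it reaches the requested key (here HugePages_Surp, which matches and yields `echo 0`). A minimal standalone sketch of that scan, with illustrative names (this is not SPDK's actual common.sh code, and reading from a file argument instead of the live /proc/meminfo is an addition for portability):

```shell
#!/usr/bin/env bash
# Sketch of a get_meminfo-style field lookup. IFS=': ' splits each
# meminfo line "Key: value kB" into var=Key, val=value (unit discarded
# into _); every non-matching field is skipped -- those skips are what
# the long runs of "continue" entries in the trace correspond to.
get_meminfo_field() {
    local get=$1 src=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"    # matched: print the value (common.sh@33: echo, return 0)
        return 0
    done < "$src"
    return 1           # field not present
}

# Demo against a captured sample rather than the live /proc/meminfo:
sample=$(mktemp)
printf 'MemTotal: 60541712 kB\nHugePages_Total: 1024\nHugePages_Surp: 0\n' > "$sample"
get_meminfo_field HugePages_Surp "$sample"   # prints 0
rm -f "$sample"
```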
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:09.528 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:09.528 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:09.528 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.528 00:47:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:10.906 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:10.906 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:10.906 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:10.906 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:10.906 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:10.906 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:10.906 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:10.906 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:10.906 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:10.906 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:10.906 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:10.906 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:10.906 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:10.906 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:10.906 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:10.906 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:10.906 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:10.906 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:10.906 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:10.906 00:47:41 
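The INFO line above is emitted when scripts/setup.sh is re-run with NRHUGE=512 while node0 already holds 1024 hugepages: with CLEAR_HUGE=no, the larger existing allocation is kept rather than shrunk, which is exactly the "no_shrink_alloc" behavior under test. A hypothetical sketch of that comparison follows; the function name and argument passing are made up for illustration, and the message wording is modeled on the log (in the real script the allocated count would come from /sys/devices/system/node/node*/hugepages/*/nr_hugepages, not a parameter):

```shell
#!/usr/bin/env bash
# Hypothetical no-shrink hugepage check: if the node already has at
# least as many hugepages as requested, leave the allocation alone.
check_hugepages() {
    local nrhuge=$1 allocated=$2   # allocated would normally be read from sysfs
    if [ "$allocated" -ge "$nrhuge" ]; then
        echo "INFO: Requested $nrhuge hugepages but $allocated already allocated on node0"
    else
        echo "need to allocate $((nrhuge - allocated)) more"
    fi
}

check_hugepages 512 1024
# prints: INFO: Requested 512 hugepages but 1024 already allocated on node0
```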
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:10.906 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:10.906 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:10.906 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:10.906 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:10.906 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:10.906 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:10.906 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:10.906 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:10.906 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:10.906 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:10.906 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.906 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.906 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.906 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.906 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.906 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.906 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.906 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.906 00:47:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43719164 kB' 'MemAvailable: 47207732 kB' 'Buffers: 2704 kB' 'Cached: 12258200 kB' 'SwapCached: 0 kB' 'Active: 9259988 kB' 'Inactive: 3491988 kB' 'Active(anon): 8867204 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494256 kB' 'Mapped: 170368 kB' 'Shmem: 8376132 kB' 'KReclaimable: 195272 kB' 'Slab: 562012 kB' 'SReclaimable: 195272 kB' 'SUnreclaim: 366740 kB' 'KernelStack: 12720 kB' 'PageTables: 7364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9945736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 14936064 kB' 'DirectMap1G: 52428800 kB' 00:05:10.906
[xtrace elided: setup/common.sh@31-32 scans the mem array field by field (MemTotal through HardwareCorrupted), comparing each against AnonHugePages and skipping it via the repeated IFS=': ' / read -r var val _ / continue pattern]
00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.908
00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.908
00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:10.908
00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:10.908
00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:10.908
00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:10.908
00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:10.908
00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:10.908
00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.908
00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.908
00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.908
00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.908
00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.908
00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.908
00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.908
00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.908
00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree:
43719184 kB' 'MemAvailable: 47207752 kB' 'Buffers: 2704 kB' 'Cached: 12258200 kB' 'SwapCached: 0 kB' 'Active: 9260188 kB' 'Inactive: 3491988 kB' 'Active(anon): 8867404 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494468 kB' 'Mapped: 170368 kB' 'Shmem: 8376132 kB' 'KReclaimable: 195272 kB' 'Slab: 562012 kB' 'SReclaimable: 195272 kB' 'SUnreclaim: 366740 kB' 'KernelStack: 12784 kB' 'PageTables: 7480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9945752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 14936064 kB' 'DirectMap1G: 52428800 kB' 00:05:10.908
[xtrace elided: setup/common.sh@31-32 scans the mem array field by field (MemTotal through Slab), comparing each against HugePages_Surp and skipping it via the repeated IFS=': ' / read -r var val _ / continue pattern]
00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- #
continue 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.909 00:47:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.909 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.910 00:47:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43720096 kB' 'MemAvailable: 47208664 kB' 'Buffers: 2704 kB' 'Cached: 12258220 kB' 'SwapCached: 0 kB' 'Active: 9260108 kB' 'Inactive: 3491988 kB' 'Active(anon): 8867324 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494356 kB' 'Mapped: 170368 kB' 'Shmem: 8376152 kB' 'KReclaimable: 195272 kB' 'Slab: 562076 kB' 'SReclaimable: 195272 kB' 'SUnreclaim: 366804 kB' 'KernelStack: 12800 kB' 'PageTables: 7456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9945776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 14936064 kB' 'DirectMap1G: 52428800 kB' 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.910 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.910-00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [... repeated "IFS=': ' / read -r var val _ / continue" iterations elided: the loop skips each /proc/meminfo field (MemFree through Unaccepted) while scanning for HugePages_Rsvd ...] 00:05:10.912 00:47:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:10.912 nr_hugepages=1024 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:10.912 resv_hugepages=0 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:10.912 surplus_hugepages=0 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:10.912 
anon_hugepages=0 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43720096 kB' 'MemAvailable: 47208664 kB' 'Buffers: 2704 kB' 'Cached: 12258240 kB' 'SwapCached: 0 kB' 'Active: 9260120 kB' 'Inactive: 3491988 kB' 'Active(anon): 8867336 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 
0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494352 kB' 'Mapped: 170368 kB' 'Shmem: 8376172 kB' 'KReclaimable: 195272 kB' 'Slab: 562076 kB' 'SReclaimable: 195272 kB' 'SUnreclaim: 366804 kB' 'KernelStack: 12800 kB' 'PageTables: 7456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9945796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 14936064 kB' 'DirectMap1G: 52428800 kB' 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.912 00:47:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.912 
00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.912 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.913 
00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.913 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.914 00:47:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.914 00:47:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.914 00:47:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19883160 kB' 'MemUsed: 12993780 kB' 'SwapCached: 0 kB' 'Active: 6415284 kB' 'Inactive: 3354812 kB' 'Active(anon): 6143392 kB' 'Inactive(anon): 0 kB' 'Active(file): 271892 kB' 'Inactive(file): 3354812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9649696 kB' 'Mapped: 94704 kB' 'AnonPages: 123496 kB' 'Shmem: 6022992 kB' 'KernelStack: 6616 kB' 'PageTables: 3200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 93220 kB' 'Slab: 318800 kB' 'SReclaimable: 93220 kB' 'SUnreclaim: 225580 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.914 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.914 [... xtrace for the remaining /proc/meminfo fields (MemUsed through HugePages_Free) elided: each field is compared against HugePages_Surp via setup/common.sh@32, does not match, and is skipped with setup/common.sh@32 continue before the next setup/common.sh@31 IFS=': ' read -r var val _ ...] 00:05:10.915 00:47:41
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.915 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.915 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.915 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.915 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.915 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:10.915 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:10.915 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:10.915 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:10.915 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:10.915 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:10.915 node0=1024 expecting 1024 00:05:10.916 00:47:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:10.916 00:05:10.916 real 0m2.757s 00:05:10.916 user 0m1.193s 00:05:10.916 sys 0m1.478s 00:05:10.916 00:47:41 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.916 00:47:41 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:10.916 ************************************ 00:05:10.916 END TEST no_shrink_alloc 00:05:10.916 ************************************ 00:05:10.916 00:47:41 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:10.916 00:47:41 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:10.916 00:47:41 setup.sh.hugepages -- 
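The loop traced above walks /proc/meminfo field by field until it reaches HugePages_Surp. A minimal standalone sketch of the same `IFS=': ' read`-based parse follows; it assumes a Linux /proc, and the helper name `get_meminfo_field` is ours, not SPDK's:

```shell
#!/usr/bin/env bash
# Re-creation of the setup/common.sh parse traced above: split each
# /proc/meminfo line on ': ', compare the field name, and print the
# value (in kB for most fields) when it matches.
get_meminfo_field() {
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$want" ]]; then
            printf '%s\n' "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

get_meminfo_field MemTotal
```

The real script reads the whole printf'd snapshot instead of /proc/meminfo directly, which is why the trace shows one comparison per field rather than a single lookup.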
setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:10.916 00:47:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:10.916 00:47:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:10.916 00:47:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:10.916 00:47:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:10.916 00:47:41 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:10.916 00:47:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:10.916 00:47:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:10.916 00:47:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:10.916 00:47:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:10.916 00:47:41 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:10.916 00:47:41 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:10.916 00:05:10.916 real 0m11.346s 00:05:10.916 user 0m4.428s 00:05:10.916 sys 0m5.817s 00:05:10.916 00:47:41 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.916 00:47:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:10.916 ************************************ 00:05:10.916 END TEST hugepages 00:05:10.916 ************************************ 00:05:10.916 00:47:41 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:10.916 00:47:41 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.916 00:47:41 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.916 00:47:41 setup.sh -- common/autotest_common.sh@10 
-- # set +x 00:05:11.173 ************************************ 00:05:11.173 START TEST driver 00:05:11.173 ************************************ 00:05:11.173 00:47:41 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:11.173 * Looking for test storage... 00:05:11.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:11.173 00:47:41 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:11.173 00:47:41 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:11.173 00:47:41 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:13.708 00:47:43 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:13.708 00:47:43 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.708 00:47:43 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.708 00:47:43 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:13.708 ************************************ 00:05:13.708 START TEST guess_driver 00:05:13.708 ************************************ 00:05:13.708 00:47:43 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:05:13.708 00:47:43 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:13.708 00:47:43 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:13.708 00:47:43 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:13.708 00:47:43 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:13.708 00:47:43 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:13.708 00:47:43 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:13.708 00:47:43 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e 
/sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:13.708 00:47:43 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:13.708 00:47:43 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:13.708 00:47:43 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:05:13.708 00:47:43 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:13.708 00:47:43 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:13.708 00:47:43 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:13.708 00:47:43 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:13.708 00:47:43 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:13.708 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:13.708 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:13.708 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:13.708 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:13.708 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:13.708 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:13.708 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:13.708 00:47:43 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:13.708 00:47:43 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:13.708 00:47:43 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:13.708 00:47:43 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:13.708 
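The guess_driver trace above picks vfio-pci because 141 IOMMU groups exist and `modprobe --show-depends vfio_pci` resolves to loadable `.ko` modules. A hedged sketch of that decision rule is below; the uio_pci_generic fallback name is illustrative, not taken from this log:

```shell
#!/usr/bin/env bash
# Sketch of the pick in setup/driver.sh: vfio-pci needs either populated
# IOMMU groups or vfio's unsafe no-IOMMU mode; otherwise fall back.
shopt -s nullglob   # make the glob expand to nothing when no groups exist

pick_driver() {
    local groups=(/sys/kernel/iommu_groups/*)
    local unsafe=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y* ]]; then
        echo vfio-pci
    else
        echo uio_pci_generic
    fi
}

pick_driver
```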
00:47:43 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:13.708 Looking for driver=vfio-pci 00:05:13.708 00:47:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.708 00:47:43 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:13.708 00:47:43 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.708 00:47:43 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:14.645 00:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:14.645 00:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:14.645 00:47:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:14.645 00:47:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:14.645 00:47:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:14.645 00:47:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:14.645 00:47:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:14.645 00:47:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:14.645 00:47:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:14.645 00:47:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:14.645 00:47:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:14.645 00:47:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:14.645 00:47:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:14.645 00:47:45 
setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] [... repeated setup/driver.sh@57/@58/@61 marker-read iterations elided: every config line read back reports driver vfio-pci ...] 00:05:15.850 00:47:46
setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:15.850 00:47:46 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:15.850 00:47:46 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:15.850 00:47:46 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:18.388 00:05:18.388 real 0m4.731s 00:05:18.388 user 0m1.064s 00:05:18.388 sys 0m1.760s 00:05:18.388 00:47:48 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.388 00:47:48 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:18.388 ************************************ 00:05:18.388 END TEST guess_driver 00:05:18.388 ************************************ 00:05:18.388 00:05:18.388 real 0m7.252s 00:05:18.389 user 0m1.635s 00:05:18.389 sys 0m2.733s 00:05:18.389 00:47:48 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.389 00:47:48 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:18.389 ************************************ 00:05:18.389 END TEST driver 00:05:18.389 ************************************ 00:05:18.389 00:47:48 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:18.389 00:47:48 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.389 00:47:48 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.389 00:47:48 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:18.389 ************************************ 00:05:18.389 START TEST devices 00:05:18.389 ************************************ 00:05:18.389 00:47:48 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:18.389 * Looking for test storage... 
00:05:18.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:18.389 00:47:48 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:18.389 00:47:48 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:18.389 00:47:48 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:18.389 00:47:48 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:19.764 00:47:50 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:19.764 00:47:50 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:19.764 00:47:50 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:19.764 00:47:50 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:19.764 00:47:50 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:19.764 00:47:50 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:19.764 00:47:50 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:19.764 00:47:50 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:19.764 00:47:50 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:19.764 00:47:50 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:19.764 00:47:50 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:19.764 00:47:50 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:19.764 00:47:50 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:19.764 00:47:50 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:19.764 00:47:50 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:19.764 00:47:50 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
00:05:19.764 00:47:50 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:19.764 00:47:50 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:05:19.764 00:47:50 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:05:19.764 00:47:50 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:19.764 00:47:50 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:19.764 00:47:50 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:19.764 No valid GPT data, bailing 00:05:19.764 00:47:50 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:19.764 00:47:50 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:19.764 00:47:50 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:19.764 00:47:50 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:19.764 00:47:50 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:19.764 00:47:50 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:19.764 00:47:50 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:05:19.764 00:47:50 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:19.764 00:47:50 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:19.764 00:47:50 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:05:19.764 00:47:50 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:19.764 00:47:50 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:19.764 00:47:50 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:19.764 00:47:50 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.764 00:47:50 setup.sh.devices -- 
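The `echo 1000204886016` above is sec_size_to_bytes reporting the disk size, which is then compared against min_disk_size. The conversion works because /sys/block/<dev>/size counts 512-byte sectors; a sketch, with nvme0n1 hedged as the disk from this particular run:

```shell
#!/usr/bin/env bash
# Sketch of the size gate in devices.sh: bytes = sectors * 512, and the
# test requires at least 3 GiB.
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the log

sec_size_to_bytes() {
    local dev=$1
    [[ -e /sys/block/$dev/size ]] || return 1
    echo $(( $(< "/sys/block/$dev/size") * 512 ))
}

# nvme0n1 is the disk from this run; it may not exist on other hosts.
if bytes=$(sec_size_to_bytes nvme0n1); then
    (( bytes >= min_disk_size )) && echo "nvme0n1 is large enough"
fi
```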
common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.764 00:47:50 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:20.023 ************************************ 00:05:20.023 START TEST nvme_mount 00:05:20.023 ************************************ 00:05:20.023 00:47:50 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:05:20.023 00:47:50 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:20.023 00:47:50 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:20.023 00:47:50 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:20.023 00:47:50 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:20.023 00:47:50 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:20.023 00:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:20.023 00:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:20.023 00:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:20.023 00:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:20.023 00:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:20.023 00:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:20.023 00:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:20.023 00:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:20.023 00:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:20.023 00:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:20.023 00:47:50 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:20.023 00:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:20.023 00:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:20.023 00:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:20.963 Creating new GPT entries in memory. 00:05:20.963 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:20.963 other utilities. 00:05:20.963 00:47:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:20.963 00:47:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:20.963 00:47:51 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:20.963 00:47:51 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:20.963 00:47:51 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:21.902 Creating new GPT entries in memory. 00:05:21.902 The operation has completed successfully. 
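The partition bounds in the `sgdisk --new=1:2048:2099199` call above come from the arithmetic at `setup/common.sh@51` and `@58-59`: `size` starts as 1 GiB in bytes, is divided down to 512-byte sectors, the first partition starts at sector 2048, and each subsequent partition begins one sector past the previous end. A sketch that reproduces the exact numbers seen in this log (one partition for `nvme_mount`, two for the later `dm_mount` run):

```shell
# Sketch of the offset arithmetic in setup/common.sh@51 and @58-59 above.
size=$((1073741824 / 512))   # 1 GiB in 512-byte sectors = 2097152
part_start=0
part_end=0
for part in 1 2; do
    part_start=$(( part_start == 0 ? 2048 : part_end + 1 ))
    part_end=$(( part_start + size - 1 ))
    echo "--new=$part:$part_start:$part_end"
done
# prints --new=1:2048:2099199 then --new=2:2099200:4196351
```

Both printed ranges match the `sgdisk` invocations recorded in this log, which is a quick way to sanity-check the loop when reading the xtrace.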
00:05:21.902 00:47:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:21.902 00:47:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:21.902 00:47:52 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1687089 00:05:21.902 00:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:21.902 00:47:52 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:21.902 00:47:52 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:21.902 00:47:52 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:21.902 00:47:52 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:21.902 00:47:52 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:21.902 00:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:21.902 00:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:21.902 00:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:21.902 00:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:21.902 00:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:05:21.902 00:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:21.902 00:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:21.902 00:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:21.902 00:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:21.902 00:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.902 00:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:21.902 00:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:21.902 00:47:52 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.902 00:47:52 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.279 00:47:53 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:23.279 
00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:23.279 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:23.279 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:23.537 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:23.537 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:23.537 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:23.537 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:23.537 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:23.537 00:47:53 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:23.537 00:47:53 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:23.537 00:47:53 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:23.537 00:47:53 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:23.537 00:47:53 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:23.537 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:23.537 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:23.537 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:23.537 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:23.537 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:23.537 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:23.537 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:23.537 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:23.537 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:23.537 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.537 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:23.537 00:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:23.537 00:47:53 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.537 00:47:53 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.473 00:47:54 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.731 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:24.731 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:24.731 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:24.731 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:24.731 00:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:24.731 00:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:24.731 00:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:05:24.731 00:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:24.731 00:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:24.731 00:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:24.731 00:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:24.731 00:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:24.731 00:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:24.731 00:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:24.731 00:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.731 00:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:24.731 00:47:55 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:24.731 00:47:55 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.731 00:47:55 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:26.115 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.116 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:26.116 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:26.116 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.116 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.116 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.116 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.116 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.116 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.116 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.116 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.116 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.116 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.116 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.116 00:47:56 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.116 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.116 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.116 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.116 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.116 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.116 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.116 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.116 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.116 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.116 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.117 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.117 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.117 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.117 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.117 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.117 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.117 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:05:26.117 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.117 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.117 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.117 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.117 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:26.117 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:26.117 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:26.117 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:26.117 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:26.117 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:26.117 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:26.117 00:47:56 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:26.117 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:26.117 00:05:26.117 real 0m6.091s 00:05:26.117 user 0m1.415s 00:05:26.117 sys 0m2.244s 00:05:26.117 00:47:56 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.117 00:47:56 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:26.117 ************************************ 00:05:26.117 END TEST nvme_mount 00:05:26.117 ************************************ 00:05:26.117 00:47:56 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:26.117 00:47:56 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
00:05:26.117 00:47:56 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.117 00:47:56 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:26.117 ************************************ 00:05:26.117 START TEST dm_mount 00:05:26.117 ************************************ 00:05:26.117 00:47:56 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:05:26.117 00:47:56 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:26.117 00:47:56 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:26.117 00:47:56 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:26.117 00:47:56 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:26.117 00:47:56 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:26.117 00:47:56 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:26.117 00:47:56 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:26.117 00:47:56 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:26.117 00:47:56 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:26.117 00:47:56 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:26.117 00:47:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:26.117 00:47:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:26.117 00:47:56 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:26.117 00:47:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:26.117 00:47:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:26.117 00:47:56 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:26.117 00:47:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 
00:05:26.117 00:47:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:26.117 00:47:56 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:26.117 00:47:56 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:26.117 00:47:56 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:27.054 Creating new GPT entries in memory. 00:05:27.054 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:27.054 other utilities. 00:05:27.054 00:47:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:27.054 00:47:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:27.054 00:47:57 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:27.054 00:47:57 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:27.054 00:47:57 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:27.992 Creating new GPT entries in memory. 00:05:27.992 The operation has completed successfully. 00:05:27.992 00:47:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:27.992 00:47:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:27.992 00:47:58 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:27.992 00:47:58 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:27.992 00:47:58 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:29.403 The operation has completed successfully. 
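The dm resolution that follows (`setup/devices.sh@165-166`) maps `/dev/mapper/nvme_dm_test` to its kernel name `dm-0` by following the symlink with `readlink -f` and stripping the directory prefix. A sketch of that step, wrapped in a helper function for illustration (`dm_basename` is not a name from the script):

```shell
# Sketch of the readlink-based resolution at setup/devices.sh@165-166 below.
# /dev/mapper/<name> is a symlink to /dev/dm-N; the test only needs "dm-N".
dm_basename() {
    local target
    target=$(readlink -f "$1")   # e.g. /dev/dm-0
    echo "${target##*/}"         # strip leading directories -> dm-0
}
```

On this node the helper would yield `dm-0`, which the log then checks against `/sys/class/block/nvme0n1p1/holders/dm-0` and `/sys/class/block/nvme0n1p2/holders/dm-0` to confirm both partitions back the dm device.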
00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1689445 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # 
local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:29.403 00:47:59 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- 
setup/devices.sh@51 -- # local test_file= 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:30.342 00:48:00 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:31.719 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:31.720 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:31.720 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:31.720 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:31.720 /dev/nvme0n1p1: 2 bytes were 
erased at offset 0x00000438 (ext4): 53 ef 00:05:31.720 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:31.720 00:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:31.720 00:05:31.720 real 0m5.640s 00:05:31.720 user 0m0.936s 00:05:31.720 sys 0m1.558s 00:05:31.720 00:48:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.720 00:48:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:31.720 ************************************ 00:05:31.720 END TEST dm_mount 00:05:31.720 ************************************ 00:05:31.720 00:48:02 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:31.720 00:48:02 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:31.720 00:48:02 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:31.720 00:48:02 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:31.720 00:48:02 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:31.720 00:48:02 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:31.720 00:48:02 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:31.979 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:31.979 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:31.979 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:31.979 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:31.979 00:48:02 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:31.979 00:48:02 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:31.979 00:48:02 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L 
/dev/mapper/nvme_dm_test ]] 00:05:31.979 00:48:02 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:31.979 00:48:02 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:31.979 00:48:02 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:31.979 00:48:02 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:31.979 00:05:31.979 real 0m13.647s 00:05:31.979 user 0m2.995s 00:05:31.979 sys 0m4.836s 00:05:31.979 00:48:02 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.979 00:48:02 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:31.979 ************************************ 00:05:31.979 END TEST devices 00:05:31.979 ************************************ 00:05:31.979 00:05:31.979 real 0m42.791s 00:05:31.979 user 0m12.409s 00:05:31.979 sys 0m18.575s 00:05:31.979 00:48:02 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.979 00:48:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:31.979 ************************************ 00:05:31.979 END TEST setup.sh 00:05:31.979 ************************************ 00:05:31.979 00:48:02 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:32.916 Hugepages 00:05:32.916 node hugesize free / total 00:05:32.916 node0 1048576kB 0 / 0 00:05:33.174 node0 2048kB 2048 / 2048 00:05:33.174 node1 1048576kB 0 / 0 00:05:33.174 node1 2048kB 0 / 0 00:05:33.174 00:05:33.174 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:33.174 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:33.174 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:33.174 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:33.174 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:33.174 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:33.175 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:33.175 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:33.175 I/OAT 
0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:33.175 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:33.175 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:33.175 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:33.175 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:33.175 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:33.175 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:33.175 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:33.175 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:33.175 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:33.175 00:48:03 -- spdk/autotest.sh@130 -- # uname -s 00:05:33.175 00:48:03 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:33.175 00:48:03 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:33.175 00:48:03 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:34.110 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:34.110 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:34.110 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:34.110 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:34.110 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:34.369 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:34.369 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:34.369 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:34.369 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:34.369 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:34.369 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:34.369 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:34.369 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:34.369 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:34.369 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:34.369 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:35.308 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:35.308 00:48:05 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:36.246 00:48:06 -- 
common/autotest_common.sh@1533 -- # bdfs=() 00:05:36.246 00:48:06 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:36.246 00:48:06 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:36.246 00:48:06 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:36.246 00:48:06 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:36.246 00:48:06 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:36.246 00:48:06 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:36.246 00:48:06 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:36.246 00:48:06 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:36.504 00:48:06 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:36.504 00:48:06 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:05:36.504 00:48:06 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:37.440 Waiting for block devices as requested 00:05:37.699 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:37.699 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:37.958 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:37.958 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:37.958 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:37.958 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:38.217 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:38.217 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:38.217 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:38.217 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:38.476 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:38.476 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:38.476 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:38.476 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:38.476 0000:80:04.2 (8086 0e22): vfio-pci -> 
ioatdma 00:05:38.735 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:38.735 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:38.735 00:48:09 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:38.735 00:48:09 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:38.735 00:48:09 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:38.735 00:48:09 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:05:38.735 00:48:09 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:38.735 00:48:09 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:38.735 00:48:09 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:38.735 00:48:09 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:38.735 00:48:09 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:38.735 00:48:09 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:38.735 00:48:09 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:38.735 00:48:09 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:38.735 00:48:09 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:38.735 00:48:09 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:05:38.735 00:48:09 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:38.735 00:48:09 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:38.735 00:48:09 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:38.735 00:48:09 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:38.735 00:48:09 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:38.735 00:48:09 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:38.735 00:48:09 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:38.735 00:48:09 -- 
common/autotest_common.sh@1557 -- # continue 00:05:38.735 00:48:09 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:38.735 00:48:09 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:38.735 00:48:09 -- common/autotest_common.sh@10 -- # set +x 00:05:38.735 00:48:09 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:38.735 00:48:09 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:38.735 00:48:09 -- common/autotest_common.sh@10 -- # set +x 00:05:38.735 00:48:09 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:40.111 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:40.111 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:40.111 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:40.111 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:40.111 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:40.111 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:40.111 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:40.111 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:40.111 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:40.111 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:40.111 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:40.111 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:40.111 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:40.111 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:40.111 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:40.111 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:41.046 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:41.046 00:48:11 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:41.046 00:48:11 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:41.046 00:48:11 -- common/autotest_common.sh@10 -- # set +x 00:05:41.304 00:48:11 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:41.304 00:48:11 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:41.304 00:48:11 -- 
common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:41.304 00:48:11 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:41.304 00:48:11 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:41.304 00:48:11 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:41.304 00:48:11 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:41.304 00:48:11 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:41.304 00:48:11 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:41.304 00:48:11 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:41.304 00:48:11 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:41.304 00:48:11 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:41.304 00:48:11 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:05:41.304 00:48:11 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:41.304 00:48:11 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:41.304 00:48:11 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:41.304 00:48:11 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:41.304 00:48:11 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:41.305 00:48:11 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:05:41.305 00:48:11 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:05:41.305 00:48:11 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1695267 00:05:41.305 00:48:11 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:41.305 00:48:11 -- common/autotest_common.sh@1598 -- # waitforlisten 1695267 00:05:41.305 00:48:11 -- common/autotest_common.sh@831 -- # '[' -z 1695267 ']' 00:05:41.305 00:48:11 -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:41.305 00:48:11 -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.305 00:48:11 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.305 00:48:11 -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.305 00:48:11 -- common/autotest_common.sh@10 -- # set +x 00:05:41.305 [2024-07-26 00:48:11.607736] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:05:41.305 [2024-07-26 00:48:11.607822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1695267 ] 00:05:41.305 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.305 [2024-07-26 00:48:11.670304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.564 [2024-07-26 00:48:11.761389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.823 00:48:12 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.823 00:48:12 -- common/autotest_common.sh@864 -- # return 0 00:05:41.823 00:48:12 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:41.823 00:48:12 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:41.823 00:48:12 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:45.111 nvme0n1 00:05:45.111 00:48:15 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:45.111 [2024-07-26 00:48:15.322598] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session 
with error 18 00:05:45.111 [2024-07-26 00:48:15.322647] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:45.111 request: 00:05:45.111 { 00:05:45.111 "nvme_ctrlr_name": "nvme0", 00:05:45.111 "password": "test", 00:05:45.111 "method": "bdev_nvme_opal_revert", 00:05:45.111 "req_id": 1 00:05:45.111 } 00:05:45.112 Got JSON-RPC error response 00:05:45.112 response: 00:05:45.112 { 00:05:45.112 "code": -32603, 00:05:45.112 "message": "Internal error" 00:05:45.112 } 00:05:45.112 00:48:15 -- common/autotest_common.sh@1604 -- # true 00:05:45.112 00:48:15 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:45.112 00:48:15 -- common/autotest_common.sh@1608 -- # killprocess 1695267 00:05:45.112 00:48:15 -- common/autotest_common.sh@950 -- # '[' -z 1695267 ']' 00:05:45.112 00:48:15 -- common/autotest_common.sh@954 -- # kill -0 1695267 00:05:45.112 00:48:15 -- common/autotest_common.sh@955 -- # uname 00:05:45.112 00:48:15 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:45.112 00:48:15 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1695267 00:05:45.112 00:48:15 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:45.112 00:48:15 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:45.112 00:48:15 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1695267' 00:05:45.112 killing process with pid 1695267 00:05:45.112 00:48:15 -- common/autotest_common.sh@969 -- # kill 1695267 00:05:45.112 00:48:15 -- common/autotest_common.sh@974 -- # wait 1695267 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: 
Unexpected size 0 of DMA remapping cleared instead of 2097152
Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: 
Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: 
Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.112 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: 
Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: 
Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: 
Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:45.113 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:47.020 00:48:17 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:47.020 00:48:17 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:47.020 00:48:17 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:47.020 00:48:17 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:47.020 00:48:17 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:47.020 00:48:17 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:47.020 00:48:17 -- common/autotest_common.sh@10 -- # set +x 00:05:47.020 00:48:17 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:47.020 00:48:17 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:47.020 00:48:17 -- common/autotest_common.sh@1101 -- # '[' 2 
-le 1 ']' 00:05:47.020 00:48:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.020 00:48:17 -- common/autotest_common.sh@10 -- # set +x 00:05:47.020 ************************************ 00:05:47.020 START TEST env 00:05:47.020 ************************************ 00:05:47.020 00:48:17 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:47.020 * Looking for test storage... 00:05:47.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:47.020 00:48:17 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:47.020 00:48:17 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.020 00:48:17 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.020 00:48:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.020 ************************************ 00:05:47.020 START TEST env_memory 00:05:47.020 ************************************ 00:05:47.020 00:48:17 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:47.020 00:05:47.020 00:05:47.020 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.020 http://cunit.sourceforge.net/ 00:05:47.020 00:05:47.020 00:05:47.020 Suite: memory 00:05:47.020 Test: alloc and free memory map ...[2024-07-26 00:48:17.328100] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:47.020 passed 00:05:47.020 Test: mem map translation ...[2024-07-26 00:48:17.348737] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:47.020 [2024-07-26 00:48:17.348759] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:47.020 [2024-07-26 00:48:17.348817] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:47.020 [2024-07-26 00:48:17.348834] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:47.020 passed 00:05:47.020 Test: mem map registration ...[2024-07-26 00:48:17.391267] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:47.020 [2024-07-26 00:48:17.391287] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:47.020 passed 00:05:47.282 Test: mem map adjacent registrations ...passed 00:05:47.282 00:05:47.282 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.282 suites 1 1 n/a 0 0 00:05:47.282 tests 4 4 4 0 0 00:05:47.282 asserts 152 152 152 0 n/a 00:05:47.282 00:05:47.282 Elapsed time = 0.147 seconds 00:05:47.282 00:05:47.283 real 0m0.155s 00:05:47.283 user 0m0.150s 00:05:47.283 sys 0m0.004s 00:05:47.283 00:48:17 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.283 00:48:17 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:47.283 ************************************ 00:05:47.283 END TEST env_memory 00:05:47.283 ************************************ 00:05:47.283 00:48:17 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:47.283 00:48:17 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
00:05:47.283 00:48:17 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.283 00:48:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.283 ************************************ 00:05:47.283 START TEST env_vtophys 00:05:47.283 ************************************ 00:05:47.283 00:48:17 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:47.283 EAL: lib.eal log level changed from notice to debug 00:05:47.283 EAL: Detected lcore 0 as core 0 on socket 0 00:05:47.283 EAL: Detected lcore 1 as core 1 on socket 0 00:05:47.283 EAL: Detected lcore 2 as core 2 on socket 0 00:05:47.283 EAL: Detected lcore 3 as core 3 on socket 0 00:05:47.283 EAL: Detected lcore 4 as core 4 on socket 0 00:05:47.283 EAL: Detected lcore 5 as core 5 on socket 0 00:05:47.283 EAL: Detected lcore 6 as core 8 on socket 0 00:05:47.283 EAL: Detected lcore 7 as core 9 on socket 0 00:05:47.283 EAL: Detected lcore 8 as core 10 on socket 0 00:05:47.283 EAL: Detected lcore 9 as core 11 on socket 0 00:05:47.283 EAL: Detected lcore 10 as core 12 on socket 0 00:05:47.283 EAL: Detected lcore 11 as core 13 on socket 0 00:05:47.283 EAL: Detected lcore 12 as core 0 on socket 1 00:05:47.283 EAL: Detected lcore 13 as core 1 on socket 1 00:05:47.283 EAL: Detected lcore 14 as core 2 on socket 1 00:05:47.283 EAL: Detected lcore 15 as core 3 on socket 1 00:05:47.283 EAL: Detected lcore 16 as core 4 on socket 1 00:05:47.283 EAL: Detected lcore 17 as core 5 on socket 1 00:05:47.283 EAL: Detected lcore 18 as core 8 on socket 1 00:05:47.283 EAL: Detected lcore 19 as core 9 on socket 1 00:05:47.283 EAL: Detected lcore 20 as core 10 on socket 1 00:05:47.283 EAL: Detected lcore 21 as core 11 on socket 1 00:05:47.283 EAL: Detected lcore 22 as core 12 on socket 1 00:05:47.283 EAL: Detected lcore 23 as core 13 on socket 1 00:05:47.283 EAL: Detected lcore 24 as core 0 on socket 0 00:05:47.283 EAL: Detected lcore 25 as core 1 on 
socket 0 00:05:47.283 EAL: Detected lcore 26 as core 2 on socket 0 00:05:47.283 EAL: Detected lcore 27 as core 3 on socket 0 00:05:47.283 EAL: Detected lcore 28 as core 4 on socket 0 00:05:47.283 EAL: Detected lcore 29 as core 5 on socket 0 00:05:47.283 EAL: Detected lcore 30 as core 8 on socket 0 00:05:47.283 EAL: Detected lcore 31 as core 9 on socket 0 00:05:47.283 EAL: Detected lcore 32 as core 10 on socket 0 00:05:47.283 EAL: Detected lcore 33 as core 11 on socket 0 00:05:47.283 EAL: Detected lcore 34 as core 12 on socket 0 00:05:47.283 EAL: Detected lcore 35 as core 13 on socket 0 00:05:47.283 EAL: Detected lcore 36 as core 0 on socket 1 00:05:47.283 EAL: Detected lcore 37 as core 1 on socket 1 00:05:47.283 EAL: Detected lcore 38 as core 2 on socket 1 00:05:47.283 EAL: Detected lcore 39 as core 3 on socket 1 00:05:47.283 EAL: Detected lcore 40 as core 4 on socket 1 00:05:47.283 EAL: Detected lcore 41 as core 5 on socket 1 00:05:47.283 EAL: Detected lcore 42 as core 8 on socket 1 00:05:47.283 EAL: Detected lcore 43 as core 9 on socket 1 00:05:47.283 EAL: Detected lcore 44 as core 10 on socket 1 00:05:47.283 EAL: Detected lcore 45 as core 11 on socket 1 00:05:47.283 EAL: Detected lcore 46 as core 12 on socket 1 00:05:47.283 EAL: Detected lcore 47 as core 13 on socket 1 00:05:47.283 EAL: Maximum logical cores by configuration: 128 00:05:47.283 EAL: Detected CPU lcores: 48 00:05:47.283 EAL: Detected NUMA nodes: 2 00:05:47.283 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:47.283 EAL: Detected shared linkage of DPDK 00:05:47.283 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:47.283 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:47.283 EAL: Registered [vdev] bus. 
00:05:47.283 EAL: bus.vdev log level changed from disabled to notice 00:05:47.283 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:47.283 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:47.283 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:47.283 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:47.283 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:47.283 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:47.283 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:47.283 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:47.283 EAL: No shared files mode enabled, IPC will be disabled 00:05:47.283 EAL: No shared files mode enabled, IPC is disabled 00:05:47.283 EAL: Bus pci wants IOVA as 'DC' 00:05:47.283 EAL: Bus vdev wants IOVA as 'DC' 00:05:47.283 EAL: Buses did not request a specific IOVA mode. 00:05:47.283 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:47.283 EAL: Selected IOVA mode 'VA' 00:05:47.283 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.283 EAL: Probing VFIO support... 00:05:47.283 EAL: IOMMU type 1 (Type 1) is supported 00:05:47.283 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:47.283 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:47.283 EAL: VFIO support initialized 00:05:47.283 EAL: Ask a virtual area of 0x2e000 bytes 00:05:47.283 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:47.283 EAL: Setting up physically contiguous memory... 
00:05:47.283 EAL: Setting maximum number of open files to 524288 00:05:47.283 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:47.283 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:47.283 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:47.283 EAL: Ask a virtual area of 0x61000 bytes 00:05:47.283 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:47.283 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:47.283 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.283 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:47.283 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:47.283 EAL: Ask a virtual area of 0x61000 bytes 00:05:47.283 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:47.283 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:47.283 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.283 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:47.283 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:47.283 EAL: Ask a virtual area of 0x61000 bytes 00:05:47.283 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:47.283 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:47.283 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.283 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:47.283 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:47.283 EAL: Ask a virtual area of 0x61000 bytes 00:05:47.283 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:47.283 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:47.283 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.283 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:47.283 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:47.283 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:05:47.283 EAL: Ask a virtual area of 0x61000 bytes 00:05:47.283 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:47.283 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:47.283 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.283 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:47.283 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:47.283 EAL: Ask a virtual area of 0x61000 bytes 00:05:47.283 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:47.283 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:47.283 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.283 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:47.283 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:47.283 EAL: Ask a virtual area of 0x61000 bytes 00:05:47.283 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:47.283 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:47.283 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.283 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:47.283 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:47.283 EAL: Ask a virtual area of 0x61000 bytes 00:05:47.283 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:47.283 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:47.283 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.283 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:47.283 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:47.283 EAL: Hugepages will be freed exactly as allocated. 
00:05:47.283 EAL: No shared files mode enabled, IPC is disabled 00:05:47.283 EAL: No shared files mode enabled, IPC is disabled 00:05:47.283 EAL: TSC frequency is ~2700000 KHz 00:05:47.283 EAL: Main lcore 0 is ready (tid=7fe05ab40a00;cpuset=[0]) 00:05:47.283 EAL: Trying to obtain current memory policy. 00:05:47.283 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.283 EAL: Restoring previous memory policy: 0 00:05:47.283 EAL: request: mp_malloc_sync 00:05:47.283 EAL: No shared files mode enabled, IPC is disabled 00:05:47.283 EAL: Heap on socket 0 was expanded by 2MB 00:05:47.283 EAL: No shared files mode enabled, IPC is disabled 00:05:47.283 EAL: No shared files mode enabled, IPC is disabled 00:05:47.283 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:47.283 EAL: Mem event callback 'spdk:(nil)' registered 00:05:47.283 00:05:47.283 00:05:47.283 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.283 http://cunit.sourceforge.net/ 00:05:47.283 00:05:47.283 00:05:47.283 Suite: components_suite 00:05:47.284 Test: vtophys_malloc_test ...passed 00:05:47.284 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:47.284 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.284 EAL: Restoring previous memory policy: 4 00:05:47.284 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.284 EAL: request: mp_malloc_sync 00:05:47.284 EAL: No shared files mode enabled, IPC is disabled 00:05:47.284 EAL: Heap on socket 0 was expanded by 4MB 00:05:47.284 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.284 EAL: request: mp_malloc_sync 00:05:47.284 EAL: No shared files mode enabled, IPC is disabled 00:05:47.284 EAL: Heap on socket 0 was shrunk by 4MB 00:05:47.284 EAL: Trying to obtain current memory policy. 
00:05:47.284 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:47.284 EAL: Restoring previous memory policy: 4
00:05:47.284 EAL: Calling mem event callback 'spdk:(nil)'
00:05:47.284 EAL: request: mp_malloc_sync
00:05:47.284 EAL: No shared files mode enabled, IPC is disabled
00:05:47.284 EAL: Heap on socket 0 was expanded by 6MB
00:05:47.284 EAL: Calling mem event callback 'spdk:(nil)'
00:05:47.284 EAL: request: mp_malloc_sync
00:05:47.284 EAL: No shared files mode enabled, IPC is disabled
00:05:47.284 EAL: Heap on socket 0 was shrunk by 6MB
00:05:47.284 EAL: Trying to obtain current memory policy.
00:05:47.284 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:47.284 EAL: Restoring previous memory policy: 4
00:05:47.284 EAL: Calling mem event callback 'spdk:(nil)'
00:05:47.284 EAL: request: mp_malloc_sync
00:05:47.284 EAL: No shared files mode enabled, IPC is disabled
00:05:47.284 EAL: Heap on socket 0 was expanded by 10MB
00:05:47.284 EAL: Calling mem event callback 'spdk:(nil)'
00:05:47.284 EAL: request: mp_malloc_sync
00:05:47.284 EAL: No shared files mode enabled, IPC is disabled
00:05:47.284 EAL: Heap on socket 0 was shrunk by 10MB
00:05:47.284 EAL: Trying to obtain current memory policy.
00:05:47.284 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:47.284 EAL: Restoring previous memory policy: 4
00:05:47.284 EAL: Calling mem event callback 'spdk:(nil)'
00:05:47.284 EAL: request: mp_malloc_sync
00:05:47.284 EAL: No shared files mode enabled, IPC is disabled
00:05:47.284 EAL: Heap on socket 0 was expanded by 18MB
00:05:47.284 EAL: Calling mem event callback 'spdk:(nil)'
00:05:47.284 EAL: request: mp_malloc_sync
00:05:47.284 EAL: No shared files mode enabled, IPC is disabled
00:05:47.284 EAL: Heap on socket 0 was shrunk by 18MB
00:05:47.284 EAL: Trying to obtain current memory policy.
00:05:47.284 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:47.284 EAL: Restoring previous memory policy: 4
00:05:47.284 EAL: Calling mem event callback 'spdk:(nil)'
00:05:47.284 EAL: request: mp_malloc_sync
00:05:47.284 EAL: No shared files mode enabled, IPC is disabled
00:05:47.284 EAL: Heap on socket 0 was expanded by 34MB
00:05:47.284 EAL: Calling mem event callback 'spdk:(nil)'
00:05:47.284 EAL: request: mp_malloc_sync
00:05:47.284 EAL: No shared files mode enabled, IPC is disabled
00:05:47.284 EAL: Heap on socket 0 was shrunk by 34MB
00:05:47.284 EAL: Trying to obtain current memory policy.
00:05:47.284 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:47.284 EAL: Restoring previous memory policy: 4
00:05:47.284 EAL: Calling mem event callback 'spdk:(nil)'
00:05:47.284 EAL: request: mp_malloc_sync
00:05:47.284 EAL: No shared files mode enabled, IPC is disabled
00:05:47.284 EAL: Heap on socket 0 was expanded by 66MB
00:05:47.284 EAL: Calling mem event callback 'spdk:(nil)'
00:05:47.284 EAL: request: mp_malloc_sync
00:05:47.284 EAL: No shared files mode enabled, IPC is disabled
00:05:47.284 EAL: Heap on socket 0 was shrunk by 66MB
00:05:47.284 EAL: Trying to obtain current memory policy.
00:05:47.284 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:47.284 EAL: Restoring previous memory policy: 4
00:05:47.284 EAL: Calling mem event callback 'spdk:(nil)'
00:05:47.284 EAL: request: mp_malloc_sync
00:05:47.284 EAL: No shared files mode enabled, IPC is disabled
00:05:47.284 EAL: Heap on socket 0 was expanded by 130MB
00:05:47.284 EAL: Calling mem event callback 'spdk:(nil)'
00:05:47.550 EAL: request: mp_malloc_sync
00:05:47.550 EAL: No shared files mode enabled, IPC is disabled
00:05:47.550 EAL: Heap on socket 0 was shrunk by 130MB
00:05:47.550 EAL: Trying to obtain current memory policy.
00:05:47.550 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:47.550 EAL: Restoring previous memory policy: 4
00:05:47.550 EAL: Calling mem event callback 'spdk:(nil)'
00:05:47.550 EAL: request: mp_malloc_sync
00:05:47.550 EAL: No shared files mode enabled, IPC is disabled
00:05:47.550 EAL: Heap on socket 0 was expanded by 258MB
00:05:47.550 EAL: Calling mem event callback 'spdk:(nil)'
00:05:47.550 EAL: request: mp_malloc_sync
00:05:47.550 EAL: No shared files mode enabled, IPC is disabled
00:05:47.550 EAL: Heap on socket 0 was shrunk by 258MB
00:05:47.550 EAL: Trying to obtain current memory policy.
00:05:47.550 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:47.809 EAL: Restoring previous memory policy: 4
00:05:47.809 EAL: Calling mem event callback 'spdk:(nil)'
00:05:47.809 EAL: request: mp_malloc_sync
00:05:47.809 EAL: No shared files mode enabled, IPC is disabled
00:05:47.809 EAL: Heap on socket 0 was expanded by 514MB
00:05:47.809 EAL: Calling mem event callback 'spdk:(nil)'
00:05:48.067 EAL: request: mp_malloc_sync
00:05:48.067 EAL: No shared files mode enabled, IPC is disabled
00:05:48.067 EAL: Heap on socket 0 was shrunk by 514MB
00:05:48.067 EAL: Trying to obtain current memory policy.
00:05:48.067 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:48.326 EAL: Restoring previous memory policy: 4
00:05:48.326 EAL: Calling mem event callback 'spdk:(nil)'
00:05:48.326 EAL: request: mp_malloc_sync
00:05:48.326 EAL: No shared files mode enabled, IPC is disabled
00:05:48.326 EAL: Heap on socket 0 was expanded by 1026MB
00:05:48.326 EAL: Calling mem event callback 'spdk:(nil)'
00:05:48.586 EAL: request: mp_malloc_sync
00:05:48.586 EAL: No shared files mode enabled, IPC is disabled
00:05:48.586 EAL: Heap on socket 0 was shrunk by 1026MB
00:05:48.586 passed
00:05:48.586
00:05:48.586 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:48.586               suites      1      1    n/a      0        0
00:05:48.586                tests      2      2      2      0        0
00:05:48.586              asserts    497    497    497      0      n/a
00:05:48.586
00:05:48.586 Elapsed time = 1.350 seconds
00:05:48.586 EAL: Calling mem event callback 'spdk:(nil)'
00:05:48.586 EAL: request: mp_malloc_sync
00:05:48.586 EAL: No shared files mode enabled, IPC is disabled
00:05:48.586 EAL: Heap on socket 0 was shrunk by 2MB
00:05:48.586 EAL: No shared files mode enabled, IPC is disabled
00:05:48.586 EAL: No shared files mode enabled, IPC is disabled
00:05:48.587 EAL: No shared files mode enabled, IPC is disabled
00:05:48.587
00:05:48.587 real 0m1.462s
00:05:48.587 user 0m0.836s
00:05:48.587 sys 0m0.592s
00:05:48.587 00:48:18 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:48.587 00:48:18 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:05:48.587 ************************************
00:05:48.587 END TEST env_vtophys
00:05:48.587 ************************************
00:05:48.587 00:48:18 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:05:48.587 00:48:18 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:48.587 00:48:18 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:48.587 00:48:18 env -- common/autotest_common.sh@10 -- # set +x
00:05:48.587
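The env_vtophys pass above expands and shrinks the heap through a fixed size ladder (6, 10, 18, 34, 66, 130, 258, 514, 1026 MB); each step is a power of two plus 2 MB. The helper below is a hypothetical reconstruction of that ladder from the log output, not code from the SPDK test itself:

```python
def vtophys_alloc_size_mb(step: int) -> int:
    # Sizes seen in the EAL log are 2**step + 2 MB for step = 2..10:
    # 6, 10, 18, 34, 66, 130, 258, 514, 1026.
    return (1 << step) + 2

ladder = [vtophys_alloc_size_mb(s) for s in range(2, 11)]
print(ladder)  # → [6, 10, 18, 34, 66, 130, 258, 514, 1026]
```

Each allocation triggers one "Heap on socket 0 was expanded by NMB" mem event callback and the matching free triggers the "shrunk by NMB" event, which is exactly the pairing visible in the log.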
************************************
00:05:48.587 START TEST env_pci
00:05:48.587 ************************************
00:05:48.587 00:48:18 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:05:48.587
00:05:48.587
00:05:48.587 CUnit - A unit testing framework for C - Version 2.1-3
00:05:48.587 http://cunit.sourceforge.net/
00:05:48.587
00:05:48.587
00:05:48.587 Suite: pci
00:05:48.587 Test: pci_hook ...[2024-07-26 00:48:19.003291] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1696165 has claimed it
00:05:48.845 EAL: Cannot find device (10000:00:01.0)
00:05:48.845 EAL: Failed to attach device on primary process
00:05:48.845 passed
00:05:48.845
00:05:48.845 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:48.845               suites      1      1    n/a      0        0
00:05:48.845                tests      1      1      1      0        0
00:05:48.845              asserts     25     25     25      0      n/a
00:05:48.845
00:05:48.845 Elapsed time = 0.021 seconds
00:05:48.845
00:05:48.845 real 0m0.031s
00:05:48.845 user 0m0.010s
00:05:48.845 sys 0m0.021s
00:05:48.845 00:48:19 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:48.845 00:48:19 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:05:48.845 ************************************
00:05:48.845 END TEST env_pci
00:05:48.845 ************************************
00:05:48.845 00:48:19 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:05:48.845 00:48:19 env -- env/env.sh@15 -- # uname
00:05:48.845 00:48:19 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:48.845 00:48:19 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:48.845 00:48:19 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:48.845 00:48:19 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:05:48.845 00:48:19 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:48.845 00:48:19 env -- common/autotest_common.sh@10 -- # set +x
00:05:48.845 ************************************
00:05:48.845 START TEST env_dpdk_post_init
00:05:48.845 ************************************
00:05:48.845 00:48:19 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:48.845 EAL: Detected CPU lcores: 48
00:05:48.845 EAL: Detected NUMA nodes: 2
00:05:48.845 EAL: Detected shared linkage of DPDK
00:05:48.845 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:48.845 EAL: Selected IOVA mode 'VA'
00:05:48.845 EAL: No free 2048 kB hugepages reported on node 1
00:05:48.845 EAL: VFIO support initialized
00:05:48.845 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:48.845 EAL: Using IOMMU type 1 (Type 1)
00:05:48.845 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0)
00:05:48.845 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0)
00:05:48.845 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0)
00:05:48.845 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0)
00:05:48.845 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0)
00:05:48.845 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0)
00:05:48.845 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0)
00:05:48.845 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0)
00:05:49.105 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1)
00:05:49.105 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1)
00:05:49.105 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1)
00:05:49.105 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1)
00:05:49.105 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1)
00:05:49.105 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1)
00:05:49.105 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1)
00:05:49.105 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1)
00:05:50.045 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1)
00:05:53.330 EAL: Releasing PCI mapped resource for 0000:88:00.0
00:05:53.330 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000
00:05:53.330 Starting DPDK initialization...
00:05:53.330 Starting SPDK post initialization...
00:05:53.330 SPDK NVMe probe
00:05:53.330 Attaching to 0000:88:00.0
00:05:53.330 Attached to 0000:88:00.0
00:05:53.330 Cleaning up...
00:05:53.330
00:05:53.330 real 0m4.382s
00:05:53.330 user 0m3.255s
00:05:53.330 sys 0m0.188s
00:05:53.330 00:48:23 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:53.330 00:48:23 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:53.330 ************************************
00:05:53.330 END TEST env_dpdk_post_init
00:05:53.330 ************************************
00:05:53.330 00:48:23 env -- env/env.sh@26 -- # uname
00:05:53.330 00:48:23 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:53.330 00:48:23 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:53.330 00:48:23 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:53.330 00:48:23 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:53.330 00:48:23 env -- common/autotest_common.sh@10 -- # set +x
00:05:53.330 ************************************
00:05:53.330 START TEST env_mem_callbacks
************************************
00:05:53.331 00:48:23 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:53.331 EAL: Detected CPU lcores: 48
00:05:53.331 EAL: Detected NUMA nodes: 2
00:05:53.331 EAL: Detected shared linkage of DPDK
00:05:53.331 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:53.331 EAL: Selected IOVA mode 'VA'
00:05:53.331 EAL: No free 2048 kB hugepages reported on node 1
00:05:53.331 EAL: VFIO support initialized
00:05:53.331 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:53.331
00:05:53.331
00:05:53.331 CUnit - A unit testing framework for C - Version 2.1-3
00:05:53.331 http://cunit.sourceforge.net/
00:05:53.331
00:05:53.331
00:05:53.331 Suite: memory
00:05:53.331 Test: test ...
00:05:53.331 register 0x200000200000 2097152
00:05:53.331 malloc 3145728
00:05:53.331 register 0x200000400000 4194304
00:05:53.331 buf 0x200000500000 len 3145728 PASSED
00:05:53.331 malloc 64
00:05:53.331 buf 0x2000004fff40 len 64 PASSED
00:05:53.331 malloc 4194304
00:05:53.331 register 0x200000800000 6291456
00:05:53.331 buf 0x200000a00000 len 4194304 PASSED
00:05:53.331 free 0x200000500000 3145728
00:05:53.331 free 0x2000004fff40 64
00:05:53.331 unregister 0x200000400000 4194304 PASSED
00:05:53.331 free 0x200000a00000 4194304
00:05:53.331 unregister 0x200000800000 6291456 PASSED
00:05:53.331 malloc 8388608
00:05:53.331 register 0x200000400000 10485760
00:05:53.331 buf 0x200000600000 len 8388608 PASSED
00:05:53.331 free 0x200000600000 8388608
00:05:53.331 unregister 0x200000400000 10485760 PASSED
00:05:53.331 passed
00:05:53.331
00:05:53.331 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:53.331               suites      1      1    n/a      0        0
00:05:53.331                tests      1      1      1      0        0
00:05:53.331              asserts     15     15     15      0      n/a
00:05:53.331
00:05:53.331 Elapsed time = 0.005 seconds
00:05:53.331
00:05:53.331 real 0m0.049s
00:05:53.331 user 0m0.011s
00:05:53.331 sys 0m0.037s
00:05:53.331 00:48:23 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:53.331 00:48:23 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:53.331 ************************************
00:05:53.331 END TEST env_mem_callbacks
00:05:53.331 ************************************
00:05:53.331
00:05:53.331 real 0m6.361s
00:05:53.331 user 0m4.372s
00:05:53.331 sys 0m1.032s
00:05:53.331 00:48:23 env -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:53.331 00:48:23 env -- common/autotest_common.sh@10 -- # set +x
00:05:53.331 ************************************
00:05:53.331 END TEST env
00:05:53.331 ************************************
00:05:53.331 00:48:23 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:05:53.331 00:48:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:53.331 00:48:23 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:53.331 00:48:23 -- common/autotest_common.sh@10 -- # set +x
00:05:53.331 ************************************
00:05:53.331 START TEST rpc
00:05:53.331 ************************************
00:05:53.331 00:48:23 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:05:53.331 * Looking for test storage...
00:05:53.331 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:53.331 00:48:23 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1696816 00:05:53.331 00:48:23 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:53.331 00:48:23 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.331 00:48:23 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1696816 00:05:53.331 00:48:23 rpc -- common/autotest_common.sh@831 -- # '[' -z 1696816 ']' 00:05:53.331 00:48:23 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.331 00:48:23 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.331 00:48:23 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.331 00:48:23 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.331 00:48:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.331 [2024-07-26 00:48:23.725670] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:05:53.331 [2024-07-26 00:48:23.725749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1696816 ] 00:05:53.331 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.589 [2024-07-26 00:48:23.783195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.589 [2024-07-26 00:48:23.869999] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
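spdk_tgt was launched here with `-e bdev`, and the trace_get_info output later in this run reports tpoint_group_mask "0x8" with only the bdev group's tpoint_mask fully set. A minimal sketch of that mask test follows; the 0x8 group bit is taken from this run's output, not from SPDK headers:

```python
TPOINT_GROUP_BDEV = 0x8  # group bit reported by trace_get_info in this run

def group_enabled(tpoint_group_mask: int, group_bit: int) -> bool:
    # A tracepoint group is traced when its bit is set in the group mask.
    return bool(tpoint_group_mask & group_bit)

print(group_enabled(0x8, TPOINT_GROUP_BDEV))  # → True  (the '-e bdev' run above)
print(group_enabled(0x0, TPOINT_GROUP_BDEV))  # → False (tracing disabled)
```

This is why every non-bdev group in the trace info shows tpoint_mask "0x0" while "bdev" shows "0xffffffffffffffff".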
00:05:53.589 [2024-07-26 00:48:23.870090] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1696816' to capture a snapshot of events at runtime. 00:05:53.589 [2024-07-26 00:48:23.870106] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:53.589 [2024-07-26 00:48:23.870118] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:53.589 [2024-07-26 00:48:23.870127] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1696816 for offline analysis/debug. 00:05:53.589 [2024-07-26 00:48:23.870160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.850 00:48:24 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.850 00:48:24 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:53.850 00:48:24 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:53.850 00:48:24 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:53.850 00:48:24 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:53.850 00:48:24 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:53.850 00:48:24 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.850 00:48:24 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.850 00:48:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.850 
************************************ 00:05:53.850 START TEST rpc_integrity 00:05:53.850 ************************************ 00:05:53.850 00:48:24 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:53.850 00:48:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:53.850 00:48:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.850 00:48:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.850 00:48:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.850 00:48:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:53.850 00:48:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:53.850 00:48:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:53.850 00:48:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:53.850 00:48:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.850 00:48:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.850 00:48:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.850 00:48:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:53.850 00:48:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:53.850 00:48:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.850 00:48:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.850 00:48:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.850 00:48:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:53.850 { 00:05:53.850 "name": "Malloc0", 00:05:53.850 "aliases": [ 00:05:53.850 "9e8e66fb-a607-4e60-b7de-0fcbdb73c981" 00:05:53.850 ], 00:05:53.850 "product_name": "Malloc disk", 00:05:53.850 "block_size": 512, 00:05:53.850 "num_blocks": 16384, 00:05:53.850 "uuid": "9e8e66fb-a607-4e60-b7de-0fcbdb73c981", 00:05:53.850 
"assigned_rate_limits": { 00:05:53.850 "rw_ios_per_sec": 0, 00:05:53.850 "rw_mbytes_per_sec": 0, 00:05:53.850 "r_mbytes_per_sec": 0, 00:05:53.850 "w_mbytes_per_sec": 0 00:05:53.850 }, 00:05:53.850 "claimed": false, 00:05:53.850 "zoned": false, 00:05:53.850 "supported_io_types": { 00:05:53.850 "read": true, 00:05:53.850 "write": true, 00:05:53.850 "unmap": true, 00:05:53.850 "flush": true, 00:05:53.850 "reset": true, 00:05:53.850 "nvme_admin": false, 00:05:53.850 "nvme_io": false, 00:05:53.850 "nvme_io_md": false, 00:05:53.850 "write_zeroes": true, 00:05:53.850 "zcopy": true, 00:05:53.850 "get_zone_info": false, 00:05:53.850 "zone_management": false, 00:05:53.850 "zone_append": false, 00:05:53.850 "compare": false, 00:05:53.850 "compare_and_write": false, 00:05:53.850 "abort": true, 00:05:53.850 "seek_hole": false, 00:05:53.850 "seek_data": false, 00:05:53.850 "copy": true, 00:05:53.850 "nvme_iov_md": false 00:05:53.850 }, 00:05:53.850 "memory_domains": [ 00:05:53.850 { 00:05:53.850 "dma_device_id": "system", 00:05:53.850 "dma_device_type": 1 00:05:53.850 }, 00:05:53.850 { 00:05:53.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:53.850 "dma_device_type": 2 00:05:53.850 } 00:05:53.850 ], 00:05:53.850 "driver_specific": {} 00:05:53.850 } 00:05:53.850 ]' 00:05:53.850 00:48:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:53.850 00:48:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:53.850 00:48:24 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:53.850 00:48:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.850 00:48:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.850 [2024-07-26 00:48:24.262766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:53.850 [2024-07-26 00:48:24.262814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:53.850 [2024-07-26 00:48:24.262838] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2005af0 00:05:53.850 [2024-07-26 00:48:24.262853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:53.850 [2024-07-26 00:48:24.264375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:53.850 [2024-07-26 00:48:24.264405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:53.850 Passthru0 00:05:53.850 00:48:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.850 00:48:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:53.850 00:48:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.850 00:48:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.110 00:48:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.110 00:48:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:54.110 { 00:05:54.110 "name": "Malloc0", 00:05:54.110 "aliases": [ 00:05:54.110 "9e8e66fb-a607-4e60-b7de-0fcbdb73c981" 00:05:54.110 ], 00:05:54.110 "product_name": "Malloc disk", 00:05:54.110 "block_size": 512, 00:05:54.110 "num_blocks": 16384, 00:05:54.110 "uuid": "9e8e66fb-a607-4e60-b7de-0fcbdb73c981", 00:05:54.110 "assigned_rate_limits": { 00:05:54.110 "rw_ios_per_sec": 0, 00:05:54.110 "rw_mbytes_per_sec": 0, 00:05:54.110 "r_mbytes_per_sec": 0, 00:05:54.110 "w_mbytes_per_sec": 0 00:05:54.110 }, 00:05:54.110 "claimed": true, 00:05:54.110 "claim_type": "exclusive_write", 00:05:54.110 "zoned": false, 00:05:54.110 "supported_io_types": { 00:05:54.110 "read": true, 00:05:54.110 "write": true, 00:05:54.110 "unmap": true, 00:05:54.110 "flush": true, 00:05:54.110 "reset": true, 00:05:54.110 "nvme_admin": false, 00:05:54.110 "nvme_io": false, 00:05:54.110 "nvme_io_md": false, 00:05:54.110 "write_zeroes": true, 00:05:54.110 "zcopy": true, 00:05:54.110 "get_zone_info": false, 00:05:54.110 
"zone_management": false, 00:05:54.110 "zone_append": false, 00:05:54.110 "compare": false, 00:05:54.110 "compare_and_write": false, 00:05:54.110 "abort": true, 00:05:54.110 "seek_hole": false, 00:05:54.110 "seek_data": false, 00:05:54.110 "copy": true, 00:05:54.110 "nvme_iov_md": false 00:05:54.110 }, 00:05:54.110 "memory_domains": [ 00:05:54.110 { 00:05:54.110 "dma_device_id": "system", 00:05:54.110 "dma_device_type": 1 00:05:54.110 }, 00:05:54.110 { 00:05:54.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.110 "dma_device_type": 2 00:05:54.110 } 00:05:54.110 ], 00:05:54.110 "driver_specific": {} 00:05:54.110 }, 00:05:54.110 { 00:05:54.110 "name": "Passthru0", 00:05:54.110 "aliases": [ 00:05:54.110 "9cf1923e-adb9-5929-a36e-c1025288fe93" 00:05:54.110 ], 00:05:54.110 "product_name": "passthru", 00:05:54.110 "block_size": 512, 00:05:54.110 "num_blocks": 16384, 00:05:54.110 "uuid": "9cf1923e-adb9-5929-a36e-c1025288fe93", 00:05:54.110 "assigned_rate_limits": { 00:05:54.110 "rw_ios_per_sec": 0, 00:05:54.110 "rw_mbytes_per_sec": 0, 00:05:54.110 "r_mbytes_per_sec": 0, 00:05:54.110 "w_mbytes_per_sec": 0 00:05:54.110 }, 00:05:54.110 "claimed": false, 00:05:54.110 "zoned": false, 00:05:54.110 "supported_io_types": { 00:05:54.110 "read": true, 00:05:54.110 "write": true, 00:05:54.110 "unmap": true, 00:05:54.110 "flush": true, 00:05:54.110 "reset": true, 00:05:54.110 "nvme_admin": false, 00:05:54.110 "nvme_io": false, 00:05:54.110 "nvme_io_md": false, 00:05:54.110 "write_zeroes": true, 00:05:54.110 "zcopy": true, 00:05:54.110 "get_zone_info": false, 00:05:54.110 "zone_management": false, 00:05:54.110 "zone_append": false, 00:05:54.110 "compare": false, 00:05:54.110 "compare_and_write": false, 00:05:54.110 "abort": true, 00:05:54.110 "seek_hole": false, 00:05:54.110 "seek_data": false, 00:05:54.110 "copy": true, 00:05:54.110 "nvme_iov_md": false 00:05:54.110 }, 00:05:54.110 "memory_domains": [ 00:05:54.110 { 00:05:54.110 "dma_device_id": "system", 00:05:54.110 
"dma_device_type": 1 00:05:54.110 }, 00:05:54.110 { 00:05:54.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.110 "dma_device_type": 2 00:05:54.110 } 00:05:54.110 ], 00:05:54.110 "driver_specific": { 00:05:54.110 "passthru": { 00:05:54.110 "name": "Passthru0", 00:05:54.110 "base_bdev_name": "Malloc0" 00:05:54.110 } 00:05:54.110 } 00:05:54.110 } 00:05:54.110 ]' 00:05:54.110 00:48:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:54.110 00:48:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:54.110 00:48:24 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:54.110 00:48:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.110 00:48:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.110 00:48:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.110 00:48:24 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:54.110 00:48:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.110 00:48:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.110 00:48:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.110 00:48:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:54.110 00:48:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.110 00:48:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.110 00:48:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.110 00:48:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:54.110 00:48:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:54.110 00:48:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:54.110 00:05:54.110 real 0m0.234s 00:05:54.110 user 0m0.155s 00:05:54.110 sys 0m0.023s 00:05:54.110 00:48:24 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:05:54.110 00:48:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.110 ************************************ 00:05:54.110 END TEST rpc_integrity 00:05:54.110 ************************************ 00:05:54.110 00:48:24 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:54.110 00:48:24 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.110 00:48:24 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.110 00:48:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.111 ************************************ 00:05:54.111 START TEST rpc_plugins 00:05:54.111 ************************************ 00:05:54.111 00:48:24 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:54.111 00:48:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:54.111 00:48:24 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.111 00:48:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.111 00:48:24 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.111 00:48:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:54.111 00:48:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:54.111 00:48:24 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.111 00:48:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.111 00:48:24 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.111 00:48:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:54.111 { 00:05:54.111 "name": "Malloc1", 00:05:54.111 "aliases": [ 00:05:54.111 "d798e676-42f0-424a-aaeb-982dc9666338" 00:05:54.111 ], 00:05:54.111 "product_name": "Malloc disk", 00:05:54.111 "block_size": 4096, 00:05:54.111 "num_blocks": 256, 00:05:54.111 "uuid": "d798e676-42f0-424a-aaeb-982dc9666338", 00:05:54.111 "assigned_rate_limits": { 00:05:54.111 
"rw_ios_per_sec": 0, 00:05:54.111 "rw_mbytes_per_sec": 0, 00:05:54.111 "r_mbytes_per_sec": 0, 00:05:54.111 "w_mbytes_per_sec": 0 00:05:54.111 }, 00:05:54.111 "claimed": false, 00:05:54.111 "zoned": false, 00:05:54.111 "supported_io_types": { 00:05:54.111 "read": true, 00:05:54.111 "write": true, 00:05:54.111 "unmap": true, 00:05:54.111 "flush": true, 00:05:54.111 "reset": true, 00:05:54.111 "nvme_admin": false, 00:05:54.111 "nvme_io": false, 00:05:54.111 "nvme_io_md": false, 00:05:54.111 "write_zeroes": true, 00:05:54.111 "zcopy": true, 00:05:54.111 "get_zone_info": false, 00:05:54.111 "zone_management": false, 00:05:54.111 "zone_append": false, 00:05:54.111 "compare": false, 00:05:54.111 "compare_and_write": false, 00:05:54.111 "abort": true, 00:05:54.111 "seek_hole": false, 00:05:54.111 "seek_data": false, 00:05:54.111 "copy": true, 00:05:54.111 "nvme_iov_md": false 00:05:54.111 }, 00:05:54.111 "memory_domains": [ 00:05:54.111 { 00:05:54.111 "dma_device_id": "system", 00:05:54.111 "dma_device_type": 1 00:05:54.111 }, 00:05:54.111 { 00:05:54.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.111 "dma_device_type": 2 00:05:54.111 } 00:05:54.111 ], 00:05:54.111 "driver_specific": {} 00:05:54.111 } 00:05:54.111 ]' 00:05:54.111 00:48:24 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:54.111 00:48:24 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:54.111 00:48:24 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:54.111 00:48:24 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.111 00:48:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.111 00:48:24 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.111 00:48:24 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:54.111 00:48:24 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.111 00:48:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- 
# set +x 00:05:54.111 00:48:24 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.111 00:48:24 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:54.111 00:48:24 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:54.111 00:48:24 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:54.111 00:05:54.111 real 0m0.108s 00:05:54.111 user 0m0.074s 00:05:54.111 sys 0m0.009s 00:05:54.111 00:48:24 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.111 00:48:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.111 ************************************ 00:05:54.111 END TEST rpc_plugins 00:05:54.111 ************************************ 00:05:54.369 00:48:24 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:54.369 00:48:24 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.369 00:48:24 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.369 00:48:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.369 ************************************ 00:05:54.370 START TEST rpc_trace_cmd_test 00:05:54.370 ************************************ 00:05:54.370 00:48:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:54.370 00:48:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:54.370 00:48:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:54.370 00:48:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.370 00:48:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:54.370 00:48:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.370 00:48:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:54.370 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1696816", 00:05:54.370 "tpoint_group_mask": "0x8", 00:05:54.370 "iscsi_conn": { 00:05:54.370 "mask": "0x2", 00:05:54.370 
"tpoint_mask": "0x0" 00:05:54.370 }, 00:05:54.370 "scsi": { 00:05:54.370 "mask": "0x4", 00:05:54.370 "tpoint_mask": "0x0" 00:05:54.370 }, 00:05:54.370 "bdev": { 00:05:54.370 "mask": "0x8", 00:05:54.370 "tpoint_mask": "0xffffffffffffffff" 00:05:54.370 }, 00:05:54.370 "nvmf_rdma": { 00:05:54.370 "mask": "0x10", 00:05:54.370 "tpoint_mask": "0x0" 00:05:54.370 }, 00:05:54.370 "nvmf_tcp": { 00:05:54.370 "mask": "0x20", 00:05:54.370 "tpoint_mask": "0x0" 00:05:54.370 }, 00:05:54.370 "ftl": { 00:05:54.370 "mask": "0x40", 00:05:54.370 "tpoint_mask": "0x0" 00:05:54.370 }, 00:05:54.370 "blobfs": { 00:05:54.370 "mask": "0x80", 00:05:54.370 "tpoint_mask": "0x0" 00:05:54.370 }, 00:05:54.370 "dsa": { 00:05:54.370 "mask": "0x200", 00:05:54.370 "tpoint_mask": "0x0" 00:05:54.370 }, 00:05:54.370 "thread": { 00:05:54.370 "mask": "0x400", 00:05:54.370 "tpoint_mask": "0x0" 00:05:54.370 }, 00:05:54.370 "nvme_pcie": { 00:05:54.370 "mask": "0x800", 00:05:54.370 "tpoint_mask": "0x0" 00:05:54.370 }, 00:05:54.370 "iaa": { 00:05:54.370 "mask": "0x1000", 00:05:54.370 "tpoint_mask": "0x0" 00:05:54.370 }, 00:05:54.370 "nvme_tcp": { 00:05:54.370 "mask": "0x2000", 00:05:54.370 "tpoint_mask": "0x0" 00:05:54.370 }, 00:05:54.370 "bdev_nvme": { 00:05:54.370 "mask": "0x4000", 00:05:54.370 "tpoint_mask": "0x0" 00:05:54.370 }, 00:05:54.370 "sock": { 00:05:54.370 "mask": "0x8000", 00:05:54.370 "tpoint_mask": "0x0" 00:05:54.370 } 00:05:54.370 }' 00:05:54.370 00:48:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:54.370 00:48:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:54.370 00:48:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:54.370 00:48:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:54.370 00:48:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:54.370 00:48:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:54.370 00:48:24 rpc.rpc_trace_cmd_test 
-- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:54.370 00:48:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:54.370 00:48:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:54.370 00:48:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:54.370 00:05:54.370 real 0m0.205s 00:05:54.370 user 0m0.185s 00:05:54.370 sys 0m0.012s 00:05:54.370 00:48:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.370 00:48:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:54.370 ************************************ 00:05:54.370 END TEST rpc_trace_cmd_test 00:05:54.370 ************************************ 00:05:54.629 00:48:24 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:54.629 00:48:24 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:54.629 00:48:24 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:54.629 00:48:24 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.629 00:48:24 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.629 00:48:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.629 ************************************ 00:05:54.629 START TEST rpc_daemon_integrity 00:05:54.629 ************************************ 00:05:54.629 00:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:54.629 00:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:54.629 00:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.629 00:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.629 00:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.629 00:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:54.629 00:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:54.629 00:48:24 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:54.629 00:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:54.629 00:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.629 00:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.629 00:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.629 00:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:54.629 00:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:54.629 00:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.629 00:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.629 00:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.629 00:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:54.629 { 00:05:54.629 "name": "Malloc2", 00:05:54.629 "aliases": [ 00:05:54.629 "36618a30-9c0d-4718-b97b-abd5c86e332a" 00:05:54.629 ], 00:05:54.629 "product_name": "Malloc disk", 00:05:54.629 "block_size": 512, 00:05:54.629 "num_blocks": 16384, 00:05:54.629 "uuid": "36618a30-9c0d-4718-b97b-abd5c86e332a", 00:05:54.629 "assigned_rate_limits": { 00:05:54.629 "rw_ios_per_sec": 0, 00:05:54.629 "rw_mbytes_per_sec": 0, 00:05:54.629 "r_mbytes_per_sec": 0, 00:05:54.629 "w_mbytes_per_sec": 0 00:05:54.629 }, 00:05:54.629 "claimed": false, 00:05:54.629 "zoned": false, 00:05:54.629 "supported_io_types": { 00:05:54.629 "read": true, 00:05:54.629 "write": true, 00:05:54.629 "unmap": true, 00:05:54.629 "flush": true, 00:05:54.629 "reset": true, 00:05:54.629 "nvme_admin": false, 00:05:54.629 "nvme_io": false, 00:05:54.629 "nvme_io_md": false, 00:05:54.629 "write_zeroes": true, 00:05:54.629 "zcopy": true, 00:05:54.629 "get_zone_info": false, 00:05:54.629 "zone_management": false, 00:05:54.629 
"zone_append": false, 00:05:54.629 "compare": false, 00:05:54.629 "compare_and_write": false, 00:05:54.629 "abort": true, 00:05:54.629 "seek_hole": false, 00:05:54.629 "seek_data": false, 00:05:54.629 "copy": true, 00:05:54.629 "nvme_iov_md": false 00:05:54.629 }, 00:05:54.629 "memory_domains": [ 00:05:54.629 { 00:05:54.629 "dma_device_id": "system", 00:05:54.629 "dma_device_type": 1 00:05:54.629 }, 00:05:54.629 { 00:05:54.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.629 "dma_device_type": 2 00:05:54.629 } 00:05:54.629 ], 00:05:54.629 "driver_specific": {} 00:05:54.629 } 00:05:54.629 ]' 00:05:54.629 00:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:54.629 00:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:54.629 00:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:54.629 00:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.629 00:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.629 [2024-07-26 00:48:24.944856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:54.629 [2024-07-26 00:48:24.944912] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:54.629 [2024-07-26 00:48:24.944935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e553d0 00:05:54.629 [2024-07-26 00:48:24.944951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:54.629 [2024-07-26 00:48:24.946298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:54.629 [2024-07-26 00:48:24.946323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:54.629 Passthru0 00:05:54.629 00:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.629 00:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # 
rpc_cmd bdev_get_bdevs 00:05:54.629 00:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.629 00:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.629 00:48:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.629 00:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:54.629 { 00:05:54.629 "name": "Malloc2", 00:05:54.629 "aliases": [ 00:05:54.629 "36618a30-9c0d-4718-b97b-abd5c86e332a" 00:05:54.629 ], 00:05:54.629 "product_name": "Malloc disk", 00:05:54.629 "block_size": 512, 00:05:54.629 "num_blocks": 16384, 00:05:54.629 "uuid": "36618a30-9c0d-4718-b97b-abd5c86e332a", 00:05:54.629 "assigned_rate_limits": { 00:05:54.629 "rw_ios_per_sec": 0, 00:05:54.629 "rw_mbytes_per_sec": 0, 00:05:54.629 "r_mbytes_per_sec": 0, 00:05:54.629 "w_mbytes_per_sec": 0 00:05:54.629 }, 00:05:54.629 "claimed": true, 00:05:54.629 "claim_type": "exclusive_write", 00:05:54.629 "zoned": false, 00:05:54.629 "supported_io_types": { 00:05:54.629 "read": true, 00:05:54.629 "write": true, 00:05:54.629 "unmap": true, 00:05:54.629 "flush": true, 00:05:54.629 "reset": true, 00:05:54.629 "nvme_admin": false, 00:05:54.629 "nvme_io": false, 00:05:54.629 "nvme_io_md": false, 00:05:54.629 "write_zeroes": true, 00:05:54.629 "zcopy": true, 00:05:54.629 "get_zone_info": false, 00:05:54.629 "zone_management": false, 00:05:54.629 "zone_append": false, 00:05:54.629 "compare": false, 00:05:54.629 "compare_and_write": false, 00:05:54.629 "abort": true, 00:05:54.629 "seek_hole": false, 00:05:54.629 "seek_data": false, 00:05:54.629 "copy": true, 00:05:54.629 "nvme_iov_md": false 00:05:54.629 }, 00:05:54.629 "memory_domains": [ 00:05:54.629 { 00:05:54.629 "dma_device_id": "system", 00:05:54.629 "dma_device_type": 1 00:05:54.629 }, 00:05:54.629 { 00:05:54.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.629 "dma_device_type": 2 00:05:54.629 } 00:05:54.629 ], 00:05:54.629 
"driver_specific": {} 00:05:54.629 }, 00:05:54.629 { 00:05:54.629 "name": "Passthru0", 00:05:54.629 "aliases": [ 00:05:54.629 "c8020b2d-29d8-5acb-97d4-0e71cbe12659" 00:05:54.629 ], 00:05:54.629 "product_name": "passthru", 00:05:54.629 "block_size": 512, 00:05:54.629 "num_blocks": 16384, 00:05:54.629 "uuid": "c8020b2d-29d8-5acb-97d4-0e71cbe12659", 00:05:54.629 "assigned_rate_limits": { 00:05:54.629 "rw_ios_per_sec": 0, 00:05:54.629 "rw_mbytes_per_sec": 0, 00:05:54.629 "r_mbytes_per_sec": 0, 00:05:54.629 "w_mbytes_per_sec": 0 00:05:54.629 }, 00:05:54.629 "claimed": false, 00:05:54.629 "zoned": false, 00:05:54.629 "supported_io_types": { 00:05:54.629 "read": true, 00:05:54.629 "write": true, 00:05:54.629 "unmap": true, 00:05:54.629 "flush": true, 00:05:54.629 "reset": true, 00:05:54.629 "nvme_admin": false, 00:05:54.629 "nvme_io": false, 00:05:54.629 "nvme_io_md": false, 00:05:54.629 "write_zeroes": true, 00:05:54.629 "zcopy": true, 00:05:54.629 "get_zone_info": false, 00:05:54.629 "zone_management": false, 00:05:54.629 "zone_append": false, 00:05:54.629 "compare": false, 00:05:54.629 "compare_and_write": false, 00:05:54.629 "abort": true, 00:05:54.629 "seek_hole": false, 00:05:54.629 "seek_data": false, 00:05:54.629 "copy": true, 00:05:54.629 "nvme_iov_md": false 00:05:54.629 }, 00:05:54.629 "memory_domains": [ 00:05:54.629 { 00:05:54.629 "dma_device_id": "system", 00:05:54.629 "dma_device_type": 1 00:05:54.629 }, 00:05:54.629 { 00:05:54.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.629 "dma_device_type": 2 00:05:54.629 } 00:05:54.629 ], 00:05:54.629 "driver_specific": { 00:05:54.629 "passthru": { 00:05:54.629 "name": "Passthru0", 00:05:54.629 "base_bdev_name": "Malloc2" 00:05:54.629 } 00:05:54.629 } 00:05:54.629 } 00:05:54.629 ]' 00:05:54.630 00:48:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:54.630 00:48:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:54.630 00:48:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # 
rpc_cmd bdev_passthru_delete Passthru0 00:05:54.630 00:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.630 00:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.630 00:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.630 00:48:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:54.630 00:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.630 00:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.630 00:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.630 00:48:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:54.630 00:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.630 00:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.630 00:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.630 00:48:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:54.630 00:48:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:54.888 00:48:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:54.888 00:05:54.888 real 0m0.228s 00:05:54.888 user 0m0.153s 00:05:54.888 sys 0m0.020s 00:05:54.888 00:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.888 00:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.888 ************************************ 00:05:54.888 END TEST rpc_daemon_integrity 00:05:54.888 ************************************ 00:05:54.888 00:48:25 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:54.888 00:48:25 rpc -- rpc/rpc.sh@84 -- # killprocess 1696816 00:05:54.888 00:48:25 rpc -- common/autotest_common.sh@950 -- # '[' -z 1696816 ']' 
00:05:54.888 00:48:25 rpc -- common/autotest_common.sh@954 -- # kill -0 1696816 00:05:54.888 00:48:25 rpc -- common/autotest_common.sh@955 -- # uname 00:05:54.888 00:48:25 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:54.888 00:48:25 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1696816 00:05:54.888 00:48:25 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:54.888 00:48:25 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:54.888 00:48:25 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1696816' 00:05:54.888 killing process with pid 1696816 00:05:54.888 00:48:25 rpc -- common/autotest_common.sh@969 -- # kill 1696816 00:05:54.888 00:48:25 rpc -- common/autotest_common.sh@974 -- # wait 1696816 00:05:55.147 00:05:55.147 real 0m1.892s 00:05:55.147 user 0m2.392s 00:05:55.147 sys 0m0.593s 00:05:55.147 00:48:25 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.147 00:48:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.147 ************************************ 00:05:55.147 END TEST rpc 00:05:55.147 ************************************ 00:05:55.147 00:48:25 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:55.147 00:48:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.147 00:48:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.147 00:48:25 -- common/autotest_common.sh@10 -- # set +x 00:05:55.147 ************************************ 00:05:55.147 START TEST skip_rpc 00:05:55.147 ************************************ 00:05:55.147 00:48:25 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:55.406 * Looking for test storage... 
00:05:55.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:55.406 00:48:25 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:55.406 00:48:25 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:55.406 00:48:25 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:55.406 00:48:25 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.406 00:48:25 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.406 00:48:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.406 ************************************ 00:05:55.406 START TEST skip_rpc 00:05:55.406 ************************************ 00:05:55.406 00:48:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:55.406 00:48:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1697253 00:05:55.406 00:48:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:55.406 00:48:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:55.406 00:48:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:55.406 [2024-07-26 00:48:25.689000] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:05:55.406 [2024-07-26 00:48:25.689087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1697253 ] 00:05:55.406 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.406 [2024-07-26 00:48:25.747958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.664 [2024-07-26 00:48:25.838037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.940 00:48:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:00.940 00:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:00.940 00:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:00.940 00:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:00.940 00:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.940 00:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:00.940 00:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.940 00:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:00.940 00:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.940 00:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.940 00:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:00.940 00:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:00.940 00:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:00.940 00:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:00.940 00:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es 
== 0 )) 00:06:00.940 00:48:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:00.940 00:48:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1697253 00:06:00.940 00:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1697253 ']' 00:06:00.940 00:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1697253 00:06:00.940 00:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:00.940 00:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.940 00:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1697253 00:06:00.940 00:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.940 00:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.940 00:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1697253' 00:06:00.940 killing process with pid 1697253 00:06:00.940 00:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1697253 00:06:00.940 00:48:30 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1697253 00:06:00.940 00:06:00.940 real 0m5.433s 00:06:00.940 user 0m5.109s 00:06:00.940 sys 0m0.328s 00:06:00.940 00:48:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.940 00:48:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.940 ************************************ 00:06:00.940 END TEST skip_rpc 00:06:00.940 ************************************ 00:06:00.940 00:48:31 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:00.940 00:48:31 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.940 00:48:31 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.940 00:48:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.940 
************************************ 00:06:00.940 START TEST skip_rpc_with_json 00:06:00.940 ************************************ 00:06:00.940 00:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:00.940 00:48:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:00.940 00:48:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1697945 00:06:00.940 00:48:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.940 00:48:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.940 00:48:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1697945 00:06:00.940 00:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1697945 ']' 00:06:00.940 00:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.940 00:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.940 00:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.940 00:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.940 00:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:00.940 [2024-07-26 00:48:31.171965] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:06:00.940 [2024-07-26 00:48:31.172057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1697945 ] 00:06:00.940 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.940 [2024-07-26 00:48:31.233861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.940 [2024-07-26 00:48:31.323511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.200 00:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.200 00:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:01.200 00:48:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:01.200 00:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.200 00:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:01.200 [2024-07-26 00:48:31.585152] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:01.200 request: 00:06:01.200 { 00:06:01.200 "trtype": "tcp", 00:06:01.200 "method": "nvmf_get_transports", 00:06:01.200 "req_id": 1 00:06:01.200 } 00:06:01.200 Got JSON-RPC error response 00:06:01.200 response: 00:06:01.200 { 00:06:01.200 "code": -19, 00:06:01.200 "message": "No such device" 00:06:01.200 } 00:06:01.200 00:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:01.200 00:48:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:01.200 00:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.200 00:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:01.200 [2024-07-26 00:48:31.593256] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:01.200 00:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.200 00:48:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:01.200 00:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.200 00:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:01.461 00:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.461 00:48:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:01.461 { 00:06:01.461 "subsystems": [ 00:06:01.461 { 00:06:01.461 "subsystem": "vfio_user_target", 00:06:01.461 "config": null 00:06:01.461 }, 00:06:01.461 { 00:06:01.461 "subsystem": "keyring", 00:06:01.461 "config": [] 00:06:01.461 }, 00:06:01.461 { 00:06:01.461 "subsystem": "iobuf", 00:06:01.461 "config": [ 00:06:01.461 { 00:06:01.461 "method": "iobuf_set_options", 00:06:01.461 "params": { 00:06:01.461 "small_pool_count": 8192, 00:06:01.461 "large_pool_count": 1024, 00:06:01.461 "small_bufsize": 8192, 00:06:01.461 "large_bufsize": 135168 00:06:01.461 } 00:06:01.461 } 00:06:01.461 ] 00:06:01.461 }, 00:06:01.461 { 00:06:01.461 "subsystem": "sock", 00:06:01.461 "config": [ 00:06:01.461 { 00:06:01.461 "method": "sock_set_default_impl", 00:06:01.461 "params": { 00:06:01.461 "impl_name": "posix" 00:06:01.461 } 00:06:01.461 }, 00:06:01.461 { 00:06:01.461 "method": "sock_impl_set_options", 00:06:01.461 "params": { 00:06:01.461 "impl_name": "ssl", 00:06:01.461 "recv_buf_size": 4096, 00:06:01.461 "send_buf_size": 4096, 00:06:01.461 "enable_recv_pipe": true, 00:06:01.461 "enable_quickack": false, 00:06:01.461 "enable_placement_id": 0, 00:06:01.461 "enable_zerocopy_send_server": true, 00:06:01.461 "enable_zerocopy_send_client": false, 00:06:01.461 "zerocopy_threshold": 0, 
00:06:01.461 "tls_version": 0, 00:06:01.461 "enable_ktls": false 00:06:01.461 } 00:06:01.461 }, 00:06:01.461 { 00:06:01.461 "method": "sock_impl_set_options", 00:06:01.461 "params": { 00:06:01.461 "impl_name": "posix", 00:06:01.461 "recv_buf_size": 2097152, 00:06:01.461 "send_buf_size": 2097152, 00:06:01.461 "enable_recv_pipe": true, 00:06:01.461 "enable_quickack": false, 00:06:01.461 "enable_placement_id": 0, 00:06:01.461 "enable_zerocopy_send_server": true, 00:06:01.461 "enable_zerocopy_send_client": false, 00:06:01.461 "zerocopy_threshold": 0, 00:06:01.461 "tls_version": 0, 00:06:01.461 "enable_ktls": false 00:06:01.461 } 00:06:01.461 } 00:06:01.461 ] 00:06:01.461 }, 00:06:01.461 { 00:06:01.461 "subsystem": "vmd", 00:06:01.461 "config": [] 00:06:01.461 }, 00:06:01.461 { 00:06:01.461 "subsystem": "accel", 00:06:01.461 "config": [ 00:06:01.461 { 00:06:01.461 "method": "accel_set_options", 00:06:01.461 "params": { 00:06:01.461 "small_cache_size": 128, 00:06:01.461 "large_cache_size": 16, 00:06:01.461 "task_count": 2048, 00:06:01.461 "sequence_count": 2048, 00:06:01.461 "buf_count": 2048 00:06:01.461 } 00:06:01.461 } 00:06:01.461 ] 00:06:01.461 }, 00:06:01.461 { 00:06:01.461 "subsystem": "bdev", 00:06:01.461 "config": [ 00:06:01.461 { 00:06:01.461 "method": "bdev_set_options", 00:06:01.461 "params": { 00:06:01.461 "bdev_io_pool_size": 65535, 00:06:01.461 "bdev_io_cache_size": 256, 00:06:01.461 "bdev_auto_examine": true, 00:06:01.461 "iobuf_small_cache_size": 128, 00:06:01.461 "iobuf_large_cache_size": 16 00:06:01.461 } 00:06:01.461 }, 00:06:01.461 { 00:06:01.461 "method": "bdev_raid_set_options", 00:06:01.461 "params": { 00:06:01.461 "process_window_size_kb": 1024, 00:06:01.461 "process_max_bandwidth_mb_sec": 0 00:06:01.461 } 00:06:01.461 }, 00:06:01.461 { 00:06:01.461 "method": "bdev_iscsi_set_options", 00:06:01.461 "params": { 00:06:01.461 "timeout_sec": 30 00:06:01.461 } 00:06:01.461 }, 00:06:01.461 { 00:06:01.461 "method": "bdev_nvme_set_options", 00:06:01.461 
"params": { 00:06:01.461 "action_on_timeout": "none", 00:06:01.461 "timeout_us": 0, 00:06:01.461 "timeout_admin_us": 0, 00:06:01.461 "keep_alive_timeout_ms": 10000, 00:06:01.461 "arbitration_burst": 0, 00:06:01.461 "low_priority_weight": 0, 00:06:01.461 "medium_priority_weight": 0, 00:06:01.461 "high_priority_weight": 0, 00:06:01.461 "nvme_adminq_poll_period_us": 10000, 00:06:01.461 "nvme_ioq_poll_period_us": 0, 00:06:01.461 "io_queue_requests": 0, 00:06:01.461 "delay_cmd_submit": true, 00:06:01.461 "transport_retry_count": 4, 00:06:01.461 "bdev_retry_count": 3, 00:06:01.461 "transport_ack_timeout": 0, 00:06:01.461 "ctrlr_loss_timeout_sec": 0, 00:06:01.461 "reconnect_delay_sec": 0, 00:06:01.461 "fast_io_fail_timeout_sec": 0, 00:06:01.461 "disable_auto_failback": false, 00:06:01.461 "generate_uuids": false, 00:06:01.461 "transport_tos": 0, 00:06:01.461 "nvme_error_stat": false, 00:06:01.461 "rdma_srq_size": 0, 00:06:01.461 "io_path_stat": false, 00:06:01.461 "allow_accel_sequence": false, 00:06:01.461 "rdma_max_cq_size": 0, 00:06:01.461 "rdma_cm_event_timeout_ms": 0, 00:06:01.461 "dhchap_digests": [ 00:06:01.461 "sha256", 00:06:01.461 "sha384", 00:06:01.461 "sha512" 00:06:01.461 ], 00:06:01.461 "dhchap_dhgroups": [ 00:06:01.461 "null", 00:06:01.461 "ffdhe2048", 00:06:01.461 "ffdhe3072", 00:06:01.461 "ffdhe4096", 00:06:01.461 "ffdhe6144", 00:06:01.461 "ffdhe8192" 00:06:01.461 ] 00:06:01.461 } 00:06:01.461 }, 00:06:01.461 { 00:06:01.461 "method": "bdev_nvme_set_hotplug", 00:06:01.461 "params": { 00:06:01.461 "period_us": 100000, 00:06:01.461 "enable": false 00:06:01.461 } 00:06:01.461 }, 00:06:01.461 { 00:06:01.461 "method": "bdev_wait_for_examine" 00:06:01.461 } 00:06:01.461 ] 00:06:01.461 }, 00:06:01.461 { 00:06:01.461 "subsystem": "scsi", 00:06:01.461 "config": null 00:06:01.461 }, 00:06:01.461 { 00:06:01.461 "subsystem": "scheduler", 00:06:01.461 "config": [ 00:06:01.461 { 00:06:01.461 "method": "framework_set_scheduler", 00:06:01.461 "params": { 00:06:01.461 
"name": "static" 00:06:01.462 } 00:06:01.462 } 00:06:01.462 ] 00:06:01.462 }, 00:06:01.462 { 00:06:01.462 "subsystem": "vhost_scsi", 00:06:01.462 "config": [] 00:06:01.462 }, 00:06:01.462 { 00:06:01.462 "subsystem": "vhost_blk", 00:06:01.462 "config": [] 00:06:01.462 }, 00:06:01.462 { 00:06:01.462 "subsystem": "ublk", 00:06:01.462 "config": [] 00:06:01.462 }, 00:06:01.462 { 00:06:01.462 "subsystem": "nbd", 00:06:01.462 "config": [] 00:06:01.462 }, 00:06:01.462 { 00:06:01.462 "subsystem": "nvmf", 00:06:01.462 "config": [ 00:06:01.462 { 00:06:01.462 "method": "nvmf_set_config", 00:06:01.462 "params": { 00:06:01.462 "discovery_filter": "match_any", 00:06:01.462 "admin_cmd_passthru": { 00:06:01.462 "identify_ctrlr": false 00:06:01.462 } 00:06:01.462 } 00:06:01.462 }, 00:06:01.462 { 00:06:01.462 "method": "nvmf_set_max_subsystems", 00:06:01.462 "params": { 00:06:01.462 "max_subsystems": 1024 00:06:01.462 } 00:06:01.462 }, 00:06:01.462 { 00:06:01.462 "method": "nvmf_set_crdt", 00:06:01.462 "params": { 00:06:01.462 "crdt1": 0, 00:06:01.462 "crdt2": 0, 00:06:01.462 "crdt3": 0 00:06:01.462 } 00:06:01.462 }, 00:06:01.462 { 00:06:01.462 "method": "nvmf_create_transport", 00:06:01.462 "params": { 00:06:01.462 "trtype": "TCP", 00:06:01.462 "max_queue_depth": 128, 00:06:01.462 "max_io_qpairs_per_ctrlr": 127, 00:06:01.462 "in_capsule_data_size": 4096, 00:06:01.462 "max_io_size": 131072, 00:06:01.462 "io_unit_size": 131072, 00:06:01.462 "max_aq_depth": 128, 00:06:01.462 "num_shared_buffers": 511, 00:06:01.462 "buf_cache_size": 4294967295, 00:06:01.462 "dif_insert_or_strip": false, 00:06:01.462 "zcopy": false, 00:06:01.462 "c2h_success": true, 00:06:01.462 "sock_priority": 0, 00:06:01.462 "abort_timeout_sec": 1, 00:06:01.462 "ack_timeout": 0, 00:06:01.462 "data_wr_pool_size": 0 00:06:01.462 } 00:06:01.462 } 00:06:01.462 ] 00:06:01.462 }, 00:06:01.462 { 00:06:01.462 "subsystem": "iscsi", 00:06:01.462 "config": [ 00:06:01.462 { 00:06:01.462 "method": "iscsi_set_options", 00:06:01.462 
"params": { 00:06:01.462 "node_base": "iqn.2016-06.io.spdk", 00:06:01.462 "max_sessions": 128, 00:06:01.462 "max_connections_per_session": 2, 00:06:01.462 "max_queue_depth": 64, 00:06:01.462 "default_time2wait": 2, 00:06:01.462 "default_time2retain": 20, 00:06:01.462 "first_burst_length": 8192, 00:06:01.462 "immediate_data": true, 00:06:01.462 "allow_duplicated_isid": false, 00:06:01.462 "error_recovery_level": 0, 00:06:01.462 "nop_timeout": 60, 00:06:01.462 "nop_in_interval": 30, 00:06:01.462 "disable_chap": false, 00:06:01.462 "require_chap": false, 00:06:01.462 "mutual_chap": false, 00:06:01.462 "chap_group": 0, 00:06:01.462 "max_large_datain_per_connection": 64, 00:06:01.462 "max_r2t_per_connection": 4, 00:06:01.462 "pdu_pool_size": 36864, 00:06:01.462 "immediate_data_pool_size": 16384, 00:06:01.462 "data_out_pool_size": 2048 00:06:01.462 } 00:06:01.462 } 00:06:01.462 ] 00:06:01.462 } 00:06:01.462 ] 00:06:01.462 } 00:06:01.462 00:48:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:01.462 00:48:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1697945 00:06:01.462 00:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1697945 ']' 00:06:01.462 00:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1697945 00:06:01.462 00:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:01.462 00:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:01.462 00:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1697945 00:06:01.462 00:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:01.462 00:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:01.462 00:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 1697945' 00:06:01.462 killing process with pid 1697945 00:06:01.462 00:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1697945 00:06:01.462 00:48:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1697945 00:06:02.032 00:48:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1698085 00:06:02.032 00:48:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:02.032 00:48:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:07.310 00:48:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1698085 00:06:07.310 00:48:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1698085 ']' 00:06:07.310 00:48:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1698085 00:06:07.310 00:48:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:07.310 00:48:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:07.310 00:48:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1698085 00:06:07.310 00:48:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:07.310 00:48:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:07.311 00:48:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1698085' 00:06:07.311 killing process with pid 1698085 00:06:07.311 00:48:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1698085 00:06:07.311 00:48:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1698085 00:06:07.311 00:48:37 skip_rpc.skip_rpc_with_json -- 
rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:07.311 00:48:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:07.311 00:06:07.311 real 0m6.489s 00:06:07.311 user 0m6.066s 00:06:07.311 sys 0m0.698s 00:06:07.311 00:48:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.311 00:48:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:07.311 ************************************ 00:06:07.311 END TEST skip_rpc_with_json 00:06:07.311 ************************************ 00:06:07.311 00:48:37 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:07.311 00:48:37 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.311 00:48:37 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.311 00:48:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.311 ************************************ 00:06:07.311 START TEST skip_rpc_with_delay 00:06:07.311 ************************************ 00:06:07.311 00:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:07.311 00:48:37 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:07.311 00:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:07.311 00:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:07.311 00:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:07.311 
00:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.311 00:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:07.311 00:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.311 00:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:07.311 00:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.311 00:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:07.311 00:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:07.311 00:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:07.311 [2024-07-26 00:48:37.705383] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:07.311 [2024-07-26 00:48:37.705505] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:07.311 00:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:07.311 00:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:07.311 00:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:07.311 00:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:07.311 00:06:07.311 real 0m0.063s 00:06:07.311 user 0m0.039s 00:06:07.311 sys 0m0.023s 00:06:07.311 00:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.311 00:48:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:07.311 ************************************ 00:06:07.311 END TEST skip_rpc_with_delay 00:06:07.311 ************************************ 00:06:07.571 00:48:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:07.571 00:48:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:07.571 00:48:37 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:07.571 00:48:37 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.571 00:48:37 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.571 00:48:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.571 ************************************ 00:06:07.571 START TEST exit_on_failed_rpc_init 00:06:07.571 ************************************ 00:06:07.571 00:48:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:07.571 00:48:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1698793 00:06:07.571 00:48:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.571 00:48:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1698793 00:06:07.571 00:48:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1698793 ']' 00:06:07.571 00:48:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.571 00:48:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.571 00:48:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.571 00:48:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.571 00:48:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:07.571 [2024-07-26 00:48:37.818134] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:06:07.571 [2024-07-26 00:48:37.818240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1698793 ] 00:06:07.571 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.571 [2024-07-26 00:48:37.874490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.571 [2024-07-26 00:48:37.961577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.830 00:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.830 00:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:07.830 00:48:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.831 00:48:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:07.831 00:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:07.831 00:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:07.831 00:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:07.831 00:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.831 00:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:07.831 00:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.831 00:48:38 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:07.831 00:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.831 00:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:07.831 00:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:07.831 00:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:08.089 [2024-07-26 00:48:38.266460] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:06:08.089 [2024-07-26 00:48:38.266558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1698811 ] 00:06:08.089 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.089 [2024-07-26 00:48:38.327652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.089 [2024-07-26 00:48:38.422257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.089 [2024-07-26 00:48:38.422395] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:08.089 [2024-07-26 00:48:38.422432] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:08.089 [2024-07-26 00:48:38.422446] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:08.089 00:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:08.089 00:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:08.089 00:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:08.089 00:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:08.089 00:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:08.089 00:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:08.089 00:48:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:08.089 00:48:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1698793 00:06:08.089 00:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1698793 ']' 00:06:08.089 00:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1698793 00:06:08.089 00:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:08.090 00:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.349 00:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1698793 00:06:08.349 00:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:08.349 00:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:08.349 00:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1698793' 
00:06:08.349 killing process with pid 1698793 00:06:08.349 00:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1698793 00:06:08.349 00:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1698793 00:06:08.608 00:06:08.608 real 0m1.168s 00:06:08.608 user 0m1.276s 00:06:08.608 sys 0m0.454s 00:06:08.608 00:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.608 00:48:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:08.608 ************************************ 00:06:08.608 END TEST exit_on_failed_rpc_init 00:06:08.608 ************************************ 00:06:08.608 00:48:38 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:08.608 00:06:08.608 real 0m13.395s 00:06:08.608 user 0m12.587s 00:06:08.608 sys 0m1.665s 00:06:08.608 00:48:38 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.608 00:48:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.608 ************************************ 00:06:08.608 END TEST skip_rpc 00:06:08.608 ************************************ 00:06:08.608 00:48:38 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:08.608 00:48:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.608 00:48:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.608 00:48:38 -- common/autotest_common.sh@10 -- # set +x 00:06:08.608 ************************************ 00:06:08.608 START TEST rpc_client 00:06:08.608 ************************************ 00:06:08.608 00:48:39 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:08.868 * Looking for test storage... 
00:06:08.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:08.868 00:48:39 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:08.868 OK 00:06:08.868 00:48:39 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:08.868 00:06:08.868 real 0m0.063s 00:06:08.868 user 0m0.023s 00:06:08.868 sys 0m0.044s 00:06:08.868 00:48:39 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.868 00:48:39 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:08.868 ************************************ 00:06:08.868 END TEST rpc_client 00:06:08.868 ************************************ 00:06:08.868 00:48:39 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:08.868 00:48:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.868 00:48:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.868 00:48:39 -- common/autotest_common.sh@10 -- # set +x 00:06:08.868 ************************************ 00:06:08.868 START TEST json_config 00:06:08.868 ************************************ 00:06:08.868 00:48:39 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:08.868 00:48:39 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:08.868 00:48:39 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:08.868 00:48:39 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:08.868 00:48:39 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:08.868 00:48:39 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:08.868 00:48:39 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:08.868 00:48:39 json_config -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:08.868 00:48:39 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:08.868 00:48:39 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:08.868 00:48:39 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:08.868 00:48:39 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:08.868 00:48:39 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:08.868 00:48:39 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:08.868 00:48:39 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:08.868 00:48:39 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:08.868 00:48:39 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:08.868 00:48:39 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:08.868 00:48:39 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:08.868 00:48:39 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:08.868 00:48:39 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.868 00:48:39 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.868 00:48:39 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.868 00:48:39 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:06:08.868 00:48:39 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.868 00:48:39 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.868 00:48:39 json_config -- paths/export.sh@5 -- # export PATH 00:06:08.868 00:48:39 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.868 00:48:39 json_config -- nvmf/common.sh@47 -- # : 0 00:06:08.868 00:48:39 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:08.868 00:48:39 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:08.868 00:48:39 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:08.868 00:48:39 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:08.868 00:48:39 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:08.868 00:48:39 json_config -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:08.868 00:48:39 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:08.868 00:48:39 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:08.868 00:48:39 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:08.868 00:48:39 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:08.868 00:48:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:08.868 00:48:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:08.868 00:48:39 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:08.868 00:48:39 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:08.868 00:48:39 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:08.868 00:48:39 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:08.868 00:48:39 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:08.868 00:48:39 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:08.868 00:48:39 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:08.868 00:48:39 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:08.868 00:48:39 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:08.868 00:48:39 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:08.868 00:48:39 json_config -- 
json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:08.868 00:48:39 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:06:08.868 INFO: JSON configuration test init 00:06:08.868 00:48:39 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:06:08.868 00:48:39 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:06:08.868 00:48:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:08.868 00:48:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.868 00:48:39 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:06:08.868 00:48:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:08.868 00:48:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.868 00:48:39 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:06:08.868 00:48:39 json_config -- json_config/common.sh@9 -- # local app=target 00:06:08.868 00:48:39 json_config -- json_config/common.sh@10 -- # shift 00:06:08.868 00:48:39 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:08.868 00:48:39 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:08.868 00:48:39 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:08.868 00:48:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.868 00:48:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.868 00:48:39 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1699052 00:06:08.868 00:48:39 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:08.868 Waiting for target to run... 
00:06:08.868 00:48:39 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:08.868 00:48:39 json_config -- json_config/common.sh@25 -- # waitforlisten 1699052 /var/tmp/spdk_tgt.sock 00:06:08.868 00:48:39 json_config -- common/autotest_common.sh@831 -- # '[' -z 1699052 ']' 00:06:08.868 00:48:39 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:08.868 00:48:39 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.868 00:48:39 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:08.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:08.868 00:48:39 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.868 00:48:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.868 [2024-07-26 00:48:39.218536] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:06:08.868 [2024-07-26 00:48:39.218635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1699052 ] 00:06:08.868 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.436 [2024-07-26 00:48:39.716384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.436 [2024-07-26 00:48:39.791205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.011 00:48:40 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.011 00:48:40 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:10.011 00:48:40 json_config -- json_config/common.sh@26 -- # echo '' 00:06:10.011 00:06:10.011 00:48:40 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:06:10.011 00:48:40 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:06:10.011 00:48:40 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:10.011 00:48:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.011 00:48:40 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:06:10.011 00:48:40 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:06:10.011 00:48:40 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:10.011 00:48:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.011 00:48:40 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:10.011 00:48:40 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:06:10.011 00:48:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:13.351 
00:48:43 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:06:13.351 00:48:43 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:13.351 00:48:43 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:13.351 00:48:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.351 00:48:43 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:13.351 00:48:43 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:13.351 00:48:43 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:13.351 00:48:43 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:13.351 00:48:43 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:13.351 00:48:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:13.351 00:48:43 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:13.351 00:48:43 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:13.351 00:48:43 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:06:13.351 00:48:43 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:06:13.351 00:48:43 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:06:13.351 00:48:43 json_config -- json_config/json_config.sh@51 -- # sort 00:06:13.351 00:48:43 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:06:13.351 00:48:43 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:06:13.351 00:48:43 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:06:13.351 00:48:43 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:06:13.351 00:48:43 
json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:13.351 00:48:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.351 00:48:43 json_config -- json_config/json_config.sh@59 -- # return 0 00:06:13.351 00:48:43 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:13.351 00:48:43 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:13.351 00:48:43 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:06:13.351 00:48:43 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:06:13.351 00:48:43 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:06:13.351 00:48:43 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:06:13.351 00:48:43 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:13.351 00:48:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.351 00:48:43 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:13.351 00:48:43 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:06:13.351 00:48:43 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:06:13.351 00:48:43 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:13.351 00:48:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:13.610 MallocForNvmf0 00:06:13.610 00:48:43 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:13.610 00:48:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:13.871 MallocForNvmf1 00:06:13.871 00:48:44 
json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:13.871 00:48:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:14.131 [2024-07-26 00:48:44.397621] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:14.131 00:48:44 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:14.131 00:48:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:14.389 00:48:44 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:14.389 00:48:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:14.648 00:48:44 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:14.648 00:48:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:14.906 00:48:45 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:14.906 00:48:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:15.164 [2024-07-26 00:48:45.388852] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:15.164 00:48:45 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:06:15.164 00:48:45 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:15.164 00:48:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.164 00:48:45 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:06:15.164 00:48:45 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:15.164 00:48:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.164 00:48:45 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:06:15.164 00:48:45 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:15.164 00:48:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:15.423 MallocBdevForConfigChangeCheck 00:06:15.423 00:48:45 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:06:15.423 00:48:45 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:15.423 00:48:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.423 00:48:45 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:06:15.423 00:48:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:15.680 00:48:46 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:06:15.680 INFO: shutting down applications... 
00:06:15.680 00:48:46 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:06:15.680 00:48:46 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:06:15.680 00:48:46 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:06:15.680 00:48:46 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:17.581 Calling clear_iscsi_subsystem 00:06:17.581 Calling clear_nvmf_subsystem 00:06:17.581 Calling clear_nbd_subsystem 00:06:17.581 Calling clear_ublk_subsystem 00:06:17.581 Calling clear_vhost_blk_subsystem 00:06:17.581 Calling clear_vhost_scsi_subsystem 00:06:17.581 Calling clear_bdev_subsystem 00:06:17.581 00:48:47 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:17.581 00:48:47 json_config -- json_config/json_config.sh@347 -- # count=100 00:06:17.581 00:48:47 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:06:17.581 00:48:47 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:17.581 00:48:47 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:17.581 00:48:47 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:17.859 00:48:48 json_config -- json_config/json_config.sh@349 -- # break 00:06:17.859 00:48:48 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:06:17.859 00:48:48 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:06:17.859 00:48:48 json_config -- 
json_config/common.sh@31 -- # local app=target 00:06:17.859 00:48:48 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:17.859 00:48:48 json_config -- json_config/common.sh@35 -- # [[ -n 1699052 ]] 00:06:17.859 00:48:48 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1699052 00:06:17.859 00:48:48 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:17.859 00:48:48 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:17.859 00:48:48 json_config -- json_config/common.sh@41 -- # kill -0 1699052 00:06:17.859 00:48:48 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:18.425 00:48:48 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:18.425 00:48:48 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:18.425 00:48:48 json_config -- json_config/common.sh@41 -- # kill -0 1699052 00:06:18.425 00:48:48 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:18.425 00:48:48 json_config -- json_config/common.sh@43 -- # break 00:06:18.425 00:48:48 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:18.425 00:48:48 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:18.425 SPDK target shutdown done 00:06:18.425 00:48:48 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:06:18.425 INFO: relaunching applications... 
00:06:18.425 00:48:48 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:18.425 00:48:48 json_config -- json_config/common.sh@9 -- # local app=target 00:06:18.425 00:48:48 json_config -- json_config/common.sh@10 -- # shift 00:06:18.425 00:48:48 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:18.425 00:48:48 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:18.425 00:48:48 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:18.425 00:48:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:18.425 00:48:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:18.425 00:48:48 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1700367 00:06:18.425 00:48:48 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:18.425 00:48:48 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:18.425 Waiting for target to run... 00:06:18.425 00:48:48 json_config -- json_config/common.sh@25 -- # waitforlisten 1700367 /var/tmp/spdk_tgt.sock 00:06:18.425 00:48:48 json_config -- common/autotest_common.sh@831 -- # '[' -z 1700367 ']' 00:06:18.425 00:48:48 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:18.425 00:48:48 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.425 00:48:48 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:18.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:18.425 00:48:48 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.425 00:48:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.425 [2024-07-26 00:48:48.677958] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:06:18.425 [2024-07-26 00:48:48.678068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1700367 ] 00:06:18.425 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.990 [2024-07-26 00:48:49.217506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.990 [2024-07-26 00:48:49.298687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.275 [2024-07-26 00:48:52.331053] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:22.275 [2024-07-26 00:48:52.363528] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:22.841 00:48:53 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.842 00:48:53 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:22.842 00:48:53 json_config -- json_config/common.sh@26 -- # echo '' 00:06:22.842 00:06:22.842 00:48:53 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:06:22.842 00:48:53 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:22.842 INFO: Checking if target configuration is the same... 
00:06:22.842 00:48:53 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:22.842 00:48:53 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:06:22.842 00:48:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:22.842 + '[' 2 -ne 2 ']' 00:06:22.842 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:22.842 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:22.842 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:22.842 +++ basename /dev/fd/62 00:06:22.842 ++ mktemp /tmp/62.XXX 00:06:22.842 + tmp_file_1=/tmp/62.LHZ 00:06:22.842 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:22.842 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:22.842 + tmp_file_2=/tmp/spdk_tgt_config.json.SCj 00:06:22.842 + ret=0 00:06:22.842 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:23.100 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:23.100 + diff -u /tmp/62.LHZ /tmp/spdk_tgt_config.json.SCj 00:06:23.100 + echo 'INFO: JSON config files are the same' 00:06:23.100 INFO: JSON config files are the same 00:06:23.100 + rm /tmp/62.LHZ /tmp/spdk_tgt_config.json.SCj 00:06:23.100 + exit 0 00:06:23.100 00:48:53 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:06:23.100 00:48:53 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:23.100 INFO: changing configuration and checking if this can be detected... 
00:06:23.100 00:48:53 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:23.100 00:48:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:23.357 00:48:53 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:23.357 00:48:53 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:06:23.357 00:48:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:23.357 + '[' 2 -ne 2 ']' 00:06:23.357 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:23.357 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:23.357 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:23.357 +++ basename /dev/fd/62 00:06:23.357 ++ mktemp /tmp/62.XXX 00:06:23.357 + tmp_file_1=/tmp/62.FTj 00:06:23.357 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:23.357 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:23.357 + tmp_file_2=/tmp/spdk_tgt_config.json.K1W 00:06:23.357 + ret=0 00:06:23.357 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:23.923 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:23.923 + diff -u /tmp/62.FTj /tmp/spdk_tgt_config.json.K1W 00:06:23.923 + ret=1 00:06:23.923 + echo '=== Start of file: /tmp/62.FTj ===' 00:06:23.923 + cat /tmp/62.FTj 00:06:23.923 + echo '=== End of file: /tmp/62.FTj ===' 00:06:23.923 + echo '' 00:06:23.923 + echo '=== Start of file: /tmp/spdk_tgt_config.json.K1W ===' 00:06:23.923 + cat /tmp/spdk_tgt_config.json.K1W 00:06:23.923 + echo '=== End of file: /tmp/spdk_tgt_config.json.K1W ===' 00:06:23.923 + echo '' 00:06:23.923 + rm /tmp/62.FTj /tmp/spdk_tgt_config.json.K1W 00:06:23.923 + exit 1 00:06:23.923 00:48:54 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:06:23.923 INFO: configuration change detected. 
00:06:23.923 00:48:54 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:06:23.923 00:48:54 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:06:23.923 00:48:54 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:23.923 00:48:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.923 00:48:54 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:06:23.923 00:48:54 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:06:23.923 00:48:54 json_config -- json_config/json_config.sh@321 -- # [[ -n 1700367 ]] 00:06:23.923 00:48:54 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:06:23.923 00:48:54 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:06:23.923 00:48:54 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:23.923 00:48:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.923 00:48:54 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:06:23.923 00:48:54 json_config -- json_config/json_config.sh@197 -- # uname -s 00:06:23.923 00:48:54 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:06:23.923 00:48:54 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:06:23.923 00:48:54 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:06:23.923 00:48:54 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:06:23.923 00:48:54 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:23.923 00:48:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.923 00:48:54 json_config -- json_config/json_config.sh@327 -- # killprocess 1700367 00:06:23.923 00:48:54 json_config -- common/autotest_common.sh@950 -- # '[' -z 1700367 ']' 00:06:23.923 00:48:54 json_config -- common/autotest_common.sh@954 -- # kill -0 
1700367 00:06:23.923 00:48:54 json_config -- common/autotest_common.sh@955 -- # uname 00:06:23.923 00:48:54 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:23.923 00:48:54 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1700367 00:06:23.923 00:48:54 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:23.923 00:48:54 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:23.923 00:48:54 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1700367' 00:06:23.923 killing process with pid 1700367 00:06:23.923 00:48:54 json_config -- common/autotest_common.sh@969 -- # kill 1700367 00:06:23.923 00:48:54 json_config -- common/autotest_common.sh@974 -- # wait 1700367 00:06:25.833 00:48:55 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:25.833 00:48:55 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:06:25.833 00:48:55 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:25.833 00:48:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.833 00:48:55 json_config -- json_config/json_config.sh@332 -- # return 0 00:06:25.833 00:48:55 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:06:25.833 INFO: Success 00:06:25.833 00:06:25.833 real 0m16.781s 00:06:25.833 user 0m18.550s 00:06:25.833 sys 0m2.225s 00:06:25.833 00:48:55 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.833 00:48:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.833 ************************************ 00:06:25.833 END TEST json_config 00:06:25.833 ************************************ 00:06:25.833 00:48:55 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:25.833 00:48:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.833 00:48:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.833 00:48:55 -- common/autotest_common.sh@10 -- # set +x 00:06:25.833 ************************************ 00:06:25.833 START TEST json_config_extra_key 00:06:25.833 ************************************ 00:06:25.833 00:48:55 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:25.833 00:48:55 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:25.833 00:48:55 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:25.833 00:48:55 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.833 00:48:55 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.833 00:48:55 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.833 00:48:55 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.833 00:48:55 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.833 00:48:55 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.833 00:48:55 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.833 00:48:55 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.833 00:48:55 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.833 00:48:55 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.833 00:48:55 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:25.833 00:48:55 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:25.833 00:48:55 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.833 00:48:55 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.833 00:48:55 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:25.833 00:48:55 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.833 00:48:55 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:25.833 00:48:55 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.833 00:48:55 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.833 00:48:55 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.833 00:48:55 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.833 00:48:55 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.833 00:48:55 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.833 00:48:55 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:25.833 00:48:55 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.833 00:48:55 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:25.833 00:48:55 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:25.833 00:48:55 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:25.833 00:48:55 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.833 00:48:55 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.833 00:48:55 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.833 00:48:55 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:25.833 00:48:55 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:25.833 00:48:55 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:25.833 00:48:55 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:25.833 00:48:55 
json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:25.833 00:48:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:25.833 00:48:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:25.833 00:48:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:25.833 00:48:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:25.833 00:48:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:25.833 00:48:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:25.833 00:48:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:25.833 00:48:55 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:25.833 00:48:55 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:25.833 INFO: launching applications... 
00:06:25.833 00:48:55 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:25.833 00:48:55 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:25.833 00:48:55 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:25.833 00:48:55 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:25.833 00:48:55 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:25.833 00:48:55 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:25.833 00:48:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:25.833 00:48:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:25.833 00:48:55 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1701284 00:06:25.833 00:48:55 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:25.833 00:48:55 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:25.834 Waiting for target to run... 
00:06:25.834 00:48:55 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1701284 /var/tmp/spdk_tgt.sock 00:06:25.834 00:48:55 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1701284 ']' 00:06:25.834 00:48:55 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:25.834 00:48:55 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.834 00:48:55 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:25.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:25.834 00:48:55 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.834 00:48:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:25.834 [2024-07-26 00:48:56.046109] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:06:25.834 [2024-07-26 00:48:56.046202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1701284 ] 00:06:25.834 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.401 [2024-07-26 00:48:56.535013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.401 [2024-07-26 00:48:56.617168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.660 00:48:56 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.660 00:48:56 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:26.660 00:48:56 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:26.660 00:06:26.660 00:48:56 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:26.660 INFO: shutting down applications... 
00:06:26.660 00:48:56 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:26.660 00:48:56 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:26.660 00:48:56 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:26.660 00:48:56 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1701284 ]] 00:06:26.660 00:48:56 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1701284 00:06:26.660 00:48:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:26.660 00:48:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:26.660 00:48:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1701284 00:06:26.661 00:48:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:27.227 00:48:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:27.227 00:48:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:27.227 00:48:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1701284 00:06:27.227 00:48:57 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:27.227 00:48:57 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:27.227 00:48:57 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:27.227 00:48:57 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:27.227 SPDK target shutdown done 00:06:27.227 00:48:57 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:27.227 Success 00:06:27.227 00:06:27.227 real 0m1.548s 00:06:27.227 user 0m1.343s 00:06:27.227 sys 0m0.588s 00:06:27.227 00:48:57 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.227 00:48:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:27.227 
************************************ 00:06:27.227 END TEST json_config_extra_key 00:06:27.227 ************************************ 00:06:27.227 00:48:57 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:27.227 00:48:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.227 00:48:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.227 00:48:57 -- common/autotest_common.sh@10 -- # set +x 00:06:27.227 ************************************ 00:06:27.227 START TEST alias_rpc 00:06:27.227 ************************************ 00:06:27.227 00:48:57 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:27.227 * Looking for test storage... 00:06:27.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:27.227 00:48:57 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:27.227 00:48:57 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1701545 00:06:27.228 00:48:57 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:27.228 00:48:57 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1701545 00:06:27.228 00:48:57 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 1701545 ']' 00:06:27.228 00:48:57 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.228 00:48:57 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.228 00:48:57 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:27.228 00:48:57 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.228 00:48:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.228 [2024-07-26 00:48:57.638240] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:06:27.228 [2024-07-26 00:48:57.638332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1701545 ] 00:06:27.487 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.487 [2024-07-26 00:48:57.699910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.487 [2024-07-26 00:48:57.790804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.747 00:48:58 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.747 00:48:58 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:27.747 00:48:58 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:28.006 00:48:58 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1701545 00:06:28.006 00:48:58 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1701545 ']' 00:06:28.006 00:48:58 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1701545 00:06:28.007 00:48:58 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:28.007 00:48:58 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:28.007 00:48:58 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1701545 00:06:28.007 00:48:58 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:28.007 00:48:58 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:28.007 00:48:58 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1701545' 
00:06:28.007 killing process with pid 1701545 00:06:28.007 00:48:58 alias_rpc -- common/autotest_common.sh@969 -- # kill 1701545 00:06:28.007 00:48:58 alias_rpc -- common/autotest_common.sh@974 -- # wait 1701545 00:06:28.574 00:06:28.574 real 0m1.210s 00:06:28.574 user 0m1.282s 00:06:28.574 sys 0m0.421s 00:06:28.574 00:48:58 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.574 00:48:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.574 ************************************ 00:06:28.574 END TEST alias_rpc 00:06:28.574 ************************************ 00:06:28.574 00:48:58 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:28.574 00:48:58 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:28.574 00:48:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.574 00:48:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.574 00:48:58 -- common/autotest_common.sh@10 -- # set +x 00:06:28.574 ************************************ 00:06:28.574 START TEST spdkcli_tcp 00:06:28.574 ************************************ 00:06:28.574 00:48:58 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:28.574 * Looking for test storage... 
00:06:28.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:28.574 00:48:58 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:28.574 00:48:58 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:28.574 00:48:58 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:28.574 00:48:58 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:28.574 00:48:58 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:28.574 00:48:58 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:28.574 00:48:58 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:28.574 00:48:58 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:28.574 00:48:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:28.574 00:48:58 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1701782 00:06:28.574 00:48:58 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:28.574 00:48:58 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1701782 00:06:28.574 00:48:58 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1701782 ']' 00:06:28.574 00:48:58 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.574 00:48:58 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.574 00:48:58 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:28.574 00:48:58 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.574 00:48:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:28.574 [2024-07-26 00:48:58.909133] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:06:28.574 [2024-07-26 00:48:58.909227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1701782 ] 00:06:28.574 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.574 [2024-07-26 00:48:58.965498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.833 [2024-07-26 00:48:59.051197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.833 [2024-07-26 00:48:59.051202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.092 00:48:59 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.092 00:48:59 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:29.092 00:48:59 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1701788 00:06:29.092 00:48:59 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:29.092 00:48:59 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:29.350 [ 00:06:29.350 "bdev_malloc_delete", 00:06:29.350 "bdev_malloc_create", 00:06:29.350 "bdev_null_resize", 00:06:29.350 "bdev_null_delete", 00:06:29.350 "bdev_null_create", 00:06:29.350 "bdev_nvme_cuse_unregister", 00:06:29.350 "bdev_nvme_cuse_register", 00:06:29.350 "bdev_opal_new_user", 00:06:29.350 "bdev_opal_set_lock_state", 00:06:29.350 "bdev_opal_delete", 00:06:29.350 "bdev_opal_get_info", 00:06:29.350 "bdev_opal_create", 00:06:29.350 "bdev_nvme_opal_revert", 00:06:29.350 
"bdev_nvme_opal_init", 00:06:29.350 "bdev_nvme_send_cmd", 00:06:29.350 "bdev_nvme_get_path_iostat", 00:06:29.350 "bdev_nvme_get_mdns_discovery_info", 00:06:29.350 "bdev_nvme_stop_mdns_discovery", 00:06:29.350 "bdev_nvme_start_mdns_discovery", 00:06:29.350 "bdev_nvme_set_multipath_policy", 00:06:29.350 "bdev_nvme_set_preferred_path", 00:06:29.350 "bdev_nvme_get_io_paths", 00:06:29.350 "bdev_nvme_remove_error_injection", 00:06:29.350 "bdev_nvme_add_error_injection", 00:06:29.350 "bdev_nvme_get_discovery_info", 00:06:29.350 "bdev_nvme_stop_discovery", 00:06:29.350 "bdev_nvme_start_discovery", 00:06:29.350 "bdev_nvme_get_controller_health_info", 00:06:29.350 "bdev_nvme_disable_controller", 00:06:29.350 "bdev_nvme_enable_controller", 00:06:29.350 "bdev_nvme_reset_controller", 00:06:29.350 "bdev_nvme_get_transport_statistics", 00:06:29.350 "bdev_nvme_apply_firmware", 00:06:29.350 "bdev_nvme_detach_controller", 00:06:29.350 "bdev_nvme_get_controllers", 00:06:29.350 "bdev_nvme_attach_controller", 00:06:29.350 "bdev_nvme_set_hotplug", 00:06:29.350 "bdev_nvme_set_options", 00:06:29.350 "bdev_passthru_delete", 00:06:29.350 "bdev_passthru_create", 00:06:29.350 "bdev_lvol_set_parent_bdev", 00:06:29.350 "bdev_lvol_set_parent", 00:06:29.350 "bdev_lvol_check_shallow_copy", 00:06:29.350 "bdev_lvol_start_shallow_copy", 00:06:29.350 "bdev_lvol_grow_lvstore", 00:06:29.350 "bdev_lvol_get_lvols", 00:06:29.350 "bdev_lvol_get_lvstores", 00:06:29.350 "bdev_lvol_delete", 00:06:29.350 "bdev_lvol_set_read_only", 00:06:29.350 "bdev_lvol_resize", 00:06:29.350 "bdev_lvol_decouple_parent", 00:06:29.350 "bdev_lvol_inflate", 00:06:29.350 "bdev_lvol_rename", 00:06:29.350 "bdev_lvol_clone_bdev", 00:06:29.350 "bdev_lvol_clone", 00:06:29.350 "bdev_lvol_snapshot", 00:06:29.350 "bdev_lvol_create", 00:06:29.350 "bdev_lvol_delete_lvstore", 00:06:29.350 "bdev_lvol_rename_lvstore", 00:06:29.350 "bdev_lvol_create_lvstore", 00:06:29.350 "bdev_raid_set_options", 00:06:29.350 "bdev_raid_remove_base_bdev", 
00:06:29.350 "bdev_raid_add_base_bdev", 00:06:29.350 "bdev_raid_delete", 00:06:29.351 "bdev_raid_create", 00:06:29.351 "bdev_raid_get_bdevs", 00:06:29.351 "bdev_error_inject_error", 00:06:29.351 "bdev_error_delete", 00:06:29.351 "bdev_error_create", 00:06:29.351 "bdev_split_delete", 00:06:29.351 "bdev_split_create", 00:06:29.351 "bdev_delay_delete", 00:06:29.351 "bdev_delay_create", 00:06:29.351 "bdev_delay_update_latency", 00:06:29.351 "bdev_zone_block_delete", 00:06:29.351 "bdev_zone_block_create", 00:06:29.351 "blobfs_create", 00:06:29.351 "blobfs_detect", 00:06:29.351 "blobfs_set_cache_size", 00:06:29.351 "bdev_aio_delete", 00:06:29.351 "bdev_aio_rescan", 00:06:29.351 "bdev_aio_create", 00:06:29.351 "bdev_ftl_set_property", 00:06:29.351 "bdev_ftl_get_properties", 00:06:29.351 "bdev_ftl_get_stats", 00:06:29.351 "bdev_ftl_unmap", 00:06:29.351 "bdev_ftl_unload", 00:06:29.351 "bdev_ftl_delete", 00:06:29.351 "bdev_ftl_load", 00:06:29.351 "bdev_ftl_create", 00:06:29.351 "bdev_virtio_attach_controller", 00:06:29.351 "bdev_virtio_scsi_get_devices", 00:06:29.351 "bdev_virtio_detach_controller", 00:06:29.351 "bdev_virtio_blk_set_hotplug", 00:06:29.351 "bdev_iscsi_delete", 00:06:29.351 "bdev_iscsi_create", 00:06:29.351 "bdev_iscsi_set_options", 00:06:29.351 "accel_error_inject_error", 00:06:29.351 "ioat_scan_accel_module", 00:06:29.351 "dsa_scan_accel_module", 00:06:29.351 "iaa_scan_accel_module", 00:06:29.351 "vfu_virtio_create_scsi_endpoint", 00:06:29.351 "vfu_virtio_scsi_remove_target", 00:06:29.351 "vfu_virtio_scsi_add_target", 00:06:29.351 "vfu_virtio_create_blk_endpoint", 00:06:29.351 "vfu_virtio_delete_endpoint", 00:06:29.351 "keyring_file_remove_key", 00:06:29.351 "keyring_file_add_key", 00:06:29.351 "keyring_linux_set_options", 00:06:29.351 "iscsi_get_histogram", 00:06:29.351 "iscsi_enable_histogram", 00:06:29.351 "iscsi_set_options", 00:06:29.351 "iscsi_get_auth_groups", 00:06:29.351 "iscsi_auth_group_remove_secret", 00:06:29.351 "iscsi_auth_group_add_secret", 
00:06:29.351 "iscsi_delete_auth_group", 00:06:29.351 "iscsi_create_auth_group", 00:06:29.351 "iscsi_set_discovery_auth", 00:06:29.351 "iscsi_get_options", 00:06:29.351 "iscsi_target_node_request_logout", 00:06:29.351 "iscsi_target_node_set_redirect", 00:06:29.351 "iscsi_target_node_set_auth", 00:06:29.351 "iscsi_target_node_add_lun", 00:06:29.351 "iscsi_get_stats", 00:06:29.351 "iscsi_get_connections", 00:06:29.351 "iscsi_portal_group_set_auth", 00:06:29.351 "iscsi_start_portal_group", 00:06:29.351 "iscsi_delete_portal_group", 00:06:29.351 "iscsi_create_portal_group", 00:06:29.351 "iscsi_get_portal_groups", 00:06:29.351 "iscsi_delete_target_node", 00:06:29.351 "iscsi_target_node_remove_pg_ig_maps", 00:06:29.351 "iscsi_target_node_add_pg_ig_maps", 00:06:29.351 "iscsi_create_target_node", 00:06:29.351 "iscsi_get_target_nodes", 00:06:29.351 "iscsi_delete_initiator_group", 00:06:29.351 "iscsi_initiator_group_remove_initiators", 00:06:29.351 "iscsi_initiator_group_add_initiators", 00:06:29.351 "iscsi_create_initiator_group", 00:06:29.351 "iscsi_get_initiator_groups", 00:06:29.351 "nvmf_set_crdt", 00:06:29.351 "nvmf_set_config", 00:06:29.351 "nvmf_set_max_subsystems", 00:06:29.351 "nvmf_stop_mdns_prr", 00:06:29.351 "nvmf_publish_mdns_prr", 00:06:29.351 "nvmf_subsystem_get_listeners", 00:06:29.351 "nvmf_subsystem_get_qpairs", 00:06:29.351 "nvmf_subsystem_get_controllers", 00:06:29.351 "nvmf_get_stats", 00:06:29.351 "nvmf_get_transports", 00:06:29.351 "nvmf_create_transport", 00:06:29.351 "nvmf_get_targets", 00:06:29.351 "nvmf_delete_target", 00:06:29.351 "nvmf_create_target", 00:06:29.351 "nvmf_subsystem_allow_any_host", 00:06:29.351 "nvmf_subsystem_remove_host", 00:06:29.351 "nvmf_subsystem_add_host", 00:06:29.351 "nvmf_ns_remove_host", 00:06:29.351 "nvmf_ns_add_host", 00:06:29.351 "nvmf_subsystem_remove_ns", 00:06:29.351 "nvmf_subsystem_add_ns", 00:06:29.351 "nvmf_subsystem_listener_set_ana_state", 00:06:29.351 "nvmf_discovery_get_referrals", 00:06:29.351 
"nvmf_discovery_remove_referral", 00:06:29.351 "nvmf_discovery_add_referral", 00:06:29.351 "nvmf_subsystem_remove_listener", 00:06:29.351 "nvmf_subsystem_add_listener", 00:06:29.351 "nvmf_delete_subsystem", 00:06:29.351 "nvmf_create_subsystem", 00:06:29.351 "nvmf_get_subsystems", 00:06:29.351 "env_dpdk_get_mem_stats", 00:06:29.351 "nbd_get_disks", 00:06:29.351 "nbd_stop_disk", 00:06:29.351 "nbd_start_disk", 00:06:29.351 "ublk_recover_disk", 00:06:29.351 "ublk_get_disks", 00:06:29.351 "ublk_stop_disk", 00:06:29.351 "ublk_start_disk", 00:06:29.351 "ublk_destroy_target", 00:06:29.351 "ublk_create_target", 00:06:29.351 "virtio_blk_create_transport", 00:06:29.351 "virtio_blk_get_transports", 00:06:29.351 "vhost_controller_set_coalescing", 00:06:29.351 "vhost_get_controllers", 00:06:29.351 "vhost_delete_controller", 00:06:29.351 "vhost_create_blk_controller", 00:06:29.351 "vhost_scsi_controller_remove_target", 00:06:29.351 "vhost_scsi_controller_add_target", 00:06:29.351 "vhost_start_scsi_controller", 00:06:29.351 "vhost_create_scsi_controller", 00:06:29.351 "thread_set_cpumask", 00:06:29.351 "framework_get_governor", 00:06:29.351 "framework_get_scheduler", 00:06:29.351 "framework_set_scheduler", 00:06:29.351 "framework_get_reactors", 00:06:29.351 "thread_get_io_channels", 00:06:29.351 "thread_get_pollers", 00:06:29.351 "thread_get_stats", 00:06:29.351 "framework_monitor_context_switch", 00:06:29.351 "spdk_kill_instance", 00:06:29.351 "log_enable_timestamps", 00:06:29.351 "log_get_flags", 00:06:29.351 "log_clear_flag", 00:06:29.351 "log_set_flag", 00:06:29.351 "log_get_level", 00:06:29.351 "log_set_level", 00:06:29.351 "log_get_print_level", 00:06:29.351 "log_set_print_level", 00:06:29.351 "framework_enable_cpumask_locks", 00:06:29.351 "framework_disable_cpumask_locks", 00:06:29.351 "framework_wait_init", 00:06:29.351 "framework_start_init", 00:06:29.351 "scsi_get_devices", 00:06:29.351 "bdev_get_histogram", 00:06:29.351 "bdev_enable_histogram", 00:06:29.351 
"bdev_set_qos_limit", 00:06:29.351 "bdev_set_qd_sampling_period", 00:06:29.351 "bdev_get_bdevs", 00:06:29.351 "bdev_reset_iostat", 00:06:29.351 "bdev_get_iostat", 00:06:29.351 "bdev_examine", 00:06:29.351 "bdev_wait_for_examine", 00:06:29.351 "bdev_set_options", 00:06:29.351 "notify_get_notifications", 00:06:29.351 "notify_get_types", 00:06:29.351 "accel_get_stats", 00:06:29.351 "accel_set_options", 00:06:29.351 "accel_set_driver", 00:06:29.351 "accel_crypto_key_destroy", 00:06:29.351 "accel_crypto_keys_get", 00:06:29.351 "accel_crypto_key_create", 00:06:29.351 "accel_assign_opc", 00:06:29.351 "accel_get_module_info", 00:06:29.351 "accel_get_opc_assignments", 00:06:29.351 "vmd_rescan", 00:06:29.351 "vmd_remove_device", 00:06:29.351 "vmd_enable", 00:06:29.351 "sock_get_default_impl", 00:06:29.351 "sock_set_default_impl", 00:06:29.351 "sock_impl_set_options", 00:06:29.351 "sock_impl_get_options", 00:06:29.351 "iobuf_get_stats", 00:06:29.351 "iobuf_set_options", 00:06:29.351 "keyring_get_keys", 00:06:29.351 "framework_get_pci_devices", 00:06:29.351 "framework_get_config", 00:06:29.351 "framework_get_subsystems", 00:06:29.351 "vfu_tgt_set_base_path", 00:06:29.351 "trace_get_info", 00:06:29.351 "trace_get_tpoint_group_mask", 00:06:29.351 "trace_disable_tpoint_group", 00:06:29.351 "trace_enable_tpoint_group", 00:06:29.351 "trace_clear_tpoint_mask", 00:06:29.351 "trace_set_tpoint_mask", 00:06:29.351 "spdk_get_version", 00:06:29.351 "rpc_get_methods" 00:06:29.351 ] 00:06:29.351 00:48:59 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:29.351 00:48:59 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:29.351 00:48:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:29.351 00:48:59 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:29.351 00:48:59 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1701782 00:06:29.351 00:48:59 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1701782 ']' 
00:06:29.351 00:48:59 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1701782 00:06:29.351 00:48:59 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:29.351 00:48:59 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:29.351 00:48:59 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1701782 00:06:29.351 00:48:59 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:29.351 00:48:59 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:29.351 00:48:59 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1701782' 00:06:29.351 killing process with pid 1701782 00:06:29.351 00:48:59 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1701782 00:06:29.351 00:48:59 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1701782 00:06:29.625 00:06:29.625 real 0m1.196s 00:06:29.625 user 0m2.098s 00:06:29.625 sys 0m0.452s 00:06:29.625 00:48:59 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.625 00:48:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:29.625 ************************************ 00:06:29.625 END TEST spdkcli_tcp 00:06:29.625 ************************************ 00:06:29.625 00:49:00 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:29.625 00:49:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.625 00:49:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.625 00:49:00 -- common/autotest_common.sh@10 -- # set +x 00:06:29.902 ************************************ 00:06:29.902 START TEST dpdk_mem_utility 00:06:29.902 ************************************ 00:06:29.902 00:49:00 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:29.902 
* Looking for test storage... 00:06:29.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:29.902 00:49:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:29.902 00:49:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1701985 00:06:29.902 00:49:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:29.902 00:49:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1701985 00:06:29.902 00:49:00 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1701985 ']' 00:06:29.902 00:49:00 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.903 00:49:00 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.903 00:49:00 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.903 00:49:00 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.903 00:49:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:29.903 [2024-07-26 00:49:00.140942] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:06:29.903 [2024-07-26 00:49:00.141027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1701985 ] 00:06:29.903 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.903 [2024-07-26 00:49:00.199814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.903 [2024-07-26 00:49:00.283822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.159 00:49:00 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.159 00:49:00 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:30.159 00:49:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:30.159 00:49:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:30.159 00:49:00 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.159 00:49:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:30.159 { 00:06:30.159 "filename": "/tmp/spdk_mem_dump.txt" 00:06:30.159 } 00:06:30.159 00:49:00 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.159 00:49:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:30.416 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:30.416 1 heaps totaling size 814.000000 MiB 00:06:30.416 size: 814.000000 MiB heap id: 0 00:06:30.416 end heaps---------- 00:06:30.416 8 mempools totaling size 598.116089 MiB 00:06:30.416 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:30.416 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:30.416 size: 84.521057 MiB name: bdev_io_1701985 00:06:30.416 size: 51.011292 MiB name: evtpool_1701985 
00:06:30.416 size: 50.003479 MiB name: msgpool_1701985 00:06:30.416 size: 21.763794 MiB name: PDU_Pool 00:06:30.416 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:30.416 size: 0.026123 MiB name: Session_Pool 00:06:30.416 end mempools------- 00:06:30.416 6 memzones totaling size 4.142822 MiB 00:06:30.416 size: 1.000366 MiB name: RG_ring_0_1701985 00:06:30.416 size: 1.000366 MiB name: RG_ring_1_1701985 00:06:30.416 size: 1.000366 MiB name: RG_ring_4_1701985 00:06:30.416 size: 1.000366 MiB name: RG_ring_5_1701985 00:06:30.416 size: 0.125366 MiB name: RG_ring_2_1701985 00:06:30.416 size: 0.015991 MiB name: RG_ring_3_1701985 00:06:30.416 end memzones------- 00:06:30.416 00:49:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:30.416 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:30.416 list of free elements. size: 12.519348 MiB 00:06:30.416 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:30.416 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:30.416 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:30.416 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:30.416 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:30.416 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:30.416 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:30.416 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:30.416 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:30.416 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:30.416 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:30.416 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:30.416 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:30.416 element at address: 0x200027e00000 with size: 0.410034 
MiB 00:06:30.416 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:30.416 list of standard malloc elements. size: 199.218079 MiB 00:06:30.416 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:30.416 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:30.416 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:30.416 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:30.416 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:30.417 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:30.417 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:30.417 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:30.417 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:30.417 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:30.417 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:30.417 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:30.417 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:30.417 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:30.417 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:30.417 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:30.417 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:30.417 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:30.417 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:30.417 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:30.417 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:30.417 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:30.417 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:30.417 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:30.417 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:30.417 element at address: 0x200003eff0c0 with 
size: 0.000183 MiB 00:06:30.417 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:30.417 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:30.417 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:30.417 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:30.417 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:30.417 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:30.417 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:30.417 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:30.417 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:30.417 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:30.417 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:30.417 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:30.417 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:30.417 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:30.417 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:30.417 list of memzone associated elements. 
size: 602.262573 MiB 00:06:30.417 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:30.417 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:30.417 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:30.417 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:30.417 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:30.417 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1701985_0 00:06:30.417 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:30.417 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1701985_0 00:06:30.417 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:30.417 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1701985_0 00:06:30.417 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:30.417 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:30.417 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:30.417 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:30.417 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:30.417 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1701985 00:06:30.417 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:30.417 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1701985 00:06:30.417 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:30.417 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1701985 00:06:30.417 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:30.417 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:30.417 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:30.417 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:30.417 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:30.417 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:30.417 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:30.417 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:30.417 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:30.417 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1701985 00:06:30.417 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:30.417 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1701985 00:06:30.417 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:30.417 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1701985 00:06:30.417 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:30.417 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1701985 00:06:30.417 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:30.417 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1701985 00:06:30.417 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:30.417 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:30.417 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:30.417 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:30.417 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:30.417 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:30.417 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:30.417 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1701985 00:06:30.417 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:30.417 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:30.417 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:30.417 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:30.417 element at address: 0x200003adb5c0 with size: 0.016113 
MiB 00:06:30.417 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1701985 00:06:30.417 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:30.417 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:30.417 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:30.417 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1701985 00:06:30.417 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:30.417 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1701985 00:06:30.417 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:30.417 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:30.417 00:49:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:30.417 00:49:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1701985 00:06:30.417 00:49:00 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1701985 ']' 00:06:30.417 00:49:00 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1701985 00:06:30.417 00:49:00 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:30.417 00:49:00 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:30.417 00:49:00 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1701985 00:06:30.417 00:49:00 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:30.417 00:49:00 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:30.417 00:49:00 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1701985' 00:06:30.417 killing process with pid 1701985 00:06:30.417 00:49:00 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1701985 00:06:30.417 00:49:00 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1701985 00:06:30.675 00:06:30.675 real 0m1.044s 
00:06:30.675 user 0m1.001s 00:06:30.675 sys 0m0.412s 00:06:30.675 00:49:01 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.675 00:49:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:30.675 ************************************ 00:06:30.675 END TEST dpdk_mem_utility 00:06:30.675 ************************************ 00:06:30.933 00:49:01 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:30.933 00:49:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:30.933 00:49:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.933 00:49:01 -- common/autotest_common.sh@10 -- # set +x 00:06:30.933 ************************************ 00:06:30.933 START TEST event 00:06:30.933 ************************************ 00:06:30.933 00:49:01 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:30.933 * Looking for test storage... 
00:06:30.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:30.933 00:49:01 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:30.933 00:49:01 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:30.933 00:49:01 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:30.933 00:49:01 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:30.933 00:49:01 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.933 00:49:01 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.933 ************************************ 00:06:30.933 START TEST event_perf 00:06:30.933 ************************************ 00:06:30.933 00:49:01 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:30.933 Running I/O for 1 seconds...[2024-07-26 00:49:01.223893] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:06:30.933 [2024-07-26 00:49:01.223956] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1702174 ] 00:06:30.933 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.933 [2024-07-26 00:49:01.287260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:31.191 [2024-07-26 00:49:01.387067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.191 [2024-07-26 00:49:01.387150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.191 [2024-07-26 00:49:01.387147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.191 [2024-07-26 00:49:01.387124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.124 Running I/O for 1 seconds... 00:06:32.124 lcore 0: 228670 00:06:32.124 lcore 1: 228668 00:06:32.124 lcore 2: 228668 00:06:32.124 lcore 3: 228669 00:06:32.124 done. 
00:06:32.124 00:06:32.124 real 0m1.260s 00:06:32.124 user 0m4.168s 00:06:32.124 sys 0m0.087s 00:06:32.124 00:49:02 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.124 00:49:02 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:32.124 ************************************ 00:06:32.124 END TEST event_perf 00:06:32.124 ************************************ 00:06:32.124 00:49:02 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:32.124 00:49:02 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:32.124 00:49:02 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.124 00:49:02 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.124 ************************************ 00:06:32.124 START TEST event_reactor 00:06:32.124 ************************************ 00:06:32.124 00:49:02 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:32.124 [2024-07-26 00:49:02.534538] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:06:32.124 [2024-07-26 00:49:02.534606] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1702337 ] 00:06:32.383 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.383 [2024-07-26 00:49:02.599537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.383 [2024-07-26 00:49:02.689632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.763 test_start 00:06:33.763 oneshot 00:06:33.763 tick 100 00:06:33.763 tick 100 00:06:33.763 tick 250 00:06:33.763 tick 100 00:06:33.763 tick 100 00:06:33.763 tick 100 00:06:33.763 tick 250 00:06:33.763 tick 500 00:06:33.763 tick 100 00:06:33.763 tick 100 00:06:33.763 tick 250 00:06:33.763 tick 100 00:06:33.763 tick 100 00:06:33.763 test_end 00:06:33.763 00:06:33.763 real 0m1.251s 00:06:33.763 user 0m1.163s 00:06:33.763 sys 0m0.084s 00:06:33.763 00:49:03 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.763 00:49:03 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:33.763 ************************************ 00:06:33.763 END TEST event_reactor 00:06:33.763 ************************************ 00:06:33.763 00:49:03 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:33.763 00:49:03 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:33.763 00:49:03 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.763 00:49:03 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.763 ************************************ 00:06:33.763 START TEST event_reactor_perf 00:06:33.763 ************************************ 00:06:33.763 00:49:03 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:33.763 [2024-07-26 00:49:03.834581] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:06:33.763 [2024-07-26 00:49:03.834648] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1702489 ] 00:06:33.763 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.763 [2024-07-26 00:49:03.898376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.763 [2024-07-26 00:49:03.988066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.702 test_start 00:06:34.702 test_end 00:06:34.702 Performance: 353359 events per second 00:06:34.702 00:06:34.702 real 0m1.249s 00:06:34.702 user 0m1.167s 00:06:34.702 sys 0m0.077s 00:06:34.702 00:49:05 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.702 00:49:05 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:34.702 ************************************ 00:06:34.702 END TEST event_reactor_perf 00:06:34.702 ************************************ 00:06:34.702 00:49:05 event -- event/event.sh@49 -- # uname -s 00:06:34.702 00:49:05 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:34.702 00:49:05 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:34.702 00:49:05 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.702 00:49:05 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.702 00:49:05 event -- common/autotest_common.sh@10 -- # set +x 00:06:34.702 ************************************ 00:06:34.702 START TEST event_scheduler 00:06:34.702 ************************************ 
00:06:34.702 00:49:05 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:34.960 * Looking for test storage... 00:06:34.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:34.960 00:49:05 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:34.960 00:49:05 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1702673 00:06:34.960 00:49:05 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:34.960 00:49:05 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:34.960 00:49:05 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1702673 00:06:34.960 00:49:05 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1702673 ']' 00:06:34.960 00:49:05 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.960 00:49:05 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.960 00:49:05 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.960 00:49:05 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.960 00:49:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.960 [2024-07-26 00:49:05.211842] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:06:34.960 [2024-07-26 00:49:05.211927] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1702673 ] 00:06:34.960 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.960 [2024-07-26 00:49:05.270141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.960 [2024-07-26 00:49:05.364281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.960 [2024-07-26 00:49:05.364338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.960 [2024-07-26 00:49:05.364403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.960 [2024-07-26 00:49:05.364407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.220 00:49:05 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.220 00:49:05 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:35.220 00:49:05 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:35.220 00:49:05 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.220 00:49:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:35.220 [2024-07-26 00:49:05.433235] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:35.220 [2024-07-26 00:49:05.433262] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:35.220 [2024-07-26 00:49:05.433279] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:35.220 [2024-07-26 00:49:05.433291] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:35.220 [2024-07-26 00:49:05.433303] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting 
scheduler core busy to 95 00:06:35.220 00:49:05 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.220 00:49:05 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:35.220 00:49:05 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.220 00:49:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:35.220 [2024-07-26 00:49:05.527563] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:35.220 00:49:05 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.220 00:49:05 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:35.220 00:49:05 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.220 00:49:05 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.220 00:49:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:35.220 ************************************ 00:06:35.220 START TEST scheduler_create_thread 00:06:35.220 ************************************ 00:06:35.220 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:35.220 00:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:35.220 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.220 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.220 2 00:06:35.220 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.220 00:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd 
--plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:35.220 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.220 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.220 3 00:06:35.220 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.220 00:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:35.220 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.220 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.220 4 00:06:35.220 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.220 00:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:35.220 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.220 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.220 5 00:06:35.220 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.221 00:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:35.221 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.221 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.221 6 
00:06:35.221 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.221 00:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:35.221 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.221 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.221 7 00:06:35.221 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.221 00:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:35.221 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.221 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.221 8 00:06:35.221 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.221 00:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:35.221 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.221 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.221 9 00:06:35.221 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.221 00:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:35.221 00:49:05 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.221 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.221 10 00:06:35.221 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.221 00:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:35.221 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.221 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.221 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.221 00:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:35.221 00:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:35.221 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.221 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.480 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.480 00:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:35.480 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.480 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.480 00:49:05 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.480 00:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:35.480 00:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:35.481 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.481 00:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.417 00:49:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.417 00:06:36.417 real 0m1.173s 00:06:36.417 user 0m0.011s 00:06:36.417 sys 0m0.003s 00:06:36.417 00:49:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.417 00:49:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.417 ************************************ 00:06:36.417 END TEST scheduler_create_thread 00:06:36.418 ************************************ 00:06:36.418 00:49:06 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:36.418 00:49:06 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1702673 00:06:36.418 00:49:06 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1702673 ']' 00:06:36.418 00:49:06 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1702673 00:06:36.418 00:49:06 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:36.418 00:49:06 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:36.418 00:49:06 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1702673 00:06:36.418 00:49:06 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:36.418 00:49:06 
event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:36.418 00:49:06 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1702673' 00:06:36.418 killing process with pid 1702673 00:06:36.418 00:49:06 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1702673 00:06:36.418 00:49:06 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1702673 00:06:36.984 [2024-07-26 00:49:07.209557] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:37.242 00:06:37.243 real 0m2.303s 00:06:37.243 user 0m2.732s 00:06:37.243 sys 0m0.311s 00:06:37.243 00:49:07 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.243 00:49:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:37.243 ************************************ 00:06:37.243 END TEST event_scheduler 00:06:37.243 ************************************ 00:06:37.243 00:49:07 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:37.243 00:49:07 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:37.243 00:49:07 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.243 00:49:07 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.243 00:49:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:37.243 ************************************ 00:06:37.243 START TEST app_repeat 00:06:37.243 ************************************ 00:06:37.243 00:49:07 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:37.243 00:49:07 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.243 00:49:07 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.243 00:49:07 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:37.243 00:49:07 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 
00:06:37.243 00:49:07 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:37.243 00:49:07 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:37.243 00:49:07 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:37.243 00:49:07 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1702988 00:06:37.243 00:49:07 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:37.243 00:49:07 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:37.243 00:49:07 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1702988' 00:06:37.243 Process app_repeat pid: 1702988 00:06:37.243 00:49:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:37.243 00:49:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:37.243 spdk_app_start Round 0 00:06:37.243 00:49:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1702988 /var/tmp/spdk-nbd.sock 00:06:37.243 00:49:07 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1702988 ']' 00:06:37.243 00:49:07 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:37.243 00:49:07 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.243 00:49:07 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:37.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:37.243 00:49:07 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.243 00:49:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:37.243 [2024-07-26 00:49:07.500877] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:06:37.243 [2024-07-26 00:49:07.500944] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1702988 ] 00:06:37.243 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.243 [2024-07-26 00:49:07.563558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:37.243 [2024-07-26 00:49:07.655613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.243 [2024-07-26 00:49:07.655618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.500 00:49:07 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.500 00:49:07 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:37.500 00:49:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:37.757 Malloc0 00:06:37.757 00:49:08 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.015 Malloc1 00:06:38.015 00:49:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.015 00:49:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.015 00:49:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.015 00:49:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:38.015 00:49:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.015 00:49:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:38.015 00:49:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks 
/var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.015 00:49:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.015 00:49:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.015 00:49:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:38.015 00:49:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.015 00:49:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:38.015 00:49:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:38.015 00:49:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:38.015 00:49:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.015 00:49:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:38.314 /dev/nbd0 00:06:38.314 00:49:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:38.314 00:49:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:38.314 00:49:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:38.314 00:49:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:38.314 00:49:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:38.314 00:49:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:38.314 00:49:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:38.314 00:49:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:38.314 00:49:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:38.314 00:49:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:38.314 00:49:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:38.314 1+0 records in 00:06:38.314 1+0 records out 00:06:38.314 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000167902 s, 24.4 MB/s 00:06:38.314 00:49:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:38.314 00:49:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:38.314 00:49:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:38.314 00:49:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:38.314 00:49:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:38.314 00:49:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.314 00:49:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.314 00:49:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:38.572 /dev/nbd1 00:06:38.572 00:49:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:38.572 00:49:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:38.572 00:49:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:38.572 00:49:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:38.572 00:49:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:38.572 00:49:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:38.572 00:49:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:38.572 00:49:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:38.572 00:49:08 event.app_repeat -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:38.572 00:49:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:38.572 00:49:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:38.572 1+0 records in 00:06:38.572 1+0 records out 00:06:38.572 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228442 s, 17.9 MB/s 00:06:38.572 00:49:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:38.572 00:49:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:38.572 00:49:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:38.572 00:49:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:38.572 00:49:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:38.572 00:49:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.572 00:49:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.572 00:49:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.572 00:49:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.572 00:49:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:38.830 { 00:06:38.830 "nbd_device": "/dev/nbd0", 00:06:38.830 "bdev_name": "Malloc0" 00:06:38.830 }, 00:06:38.830 { 00:06:38.830 "nbd_device": "/dev/nbd1", 00:06:38.830 "bdev_name": "Malloc1" 00:06:38.830 } 00:06:38.830 ]' 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:38.830 { 
00:06:38.830 "nbd_device": "/dev/nbd0", 00:06:38.830 "bdev_name": "Malloc0" 00:06:38.830 }, 00:06:38.830 { 00:06:38.830 "nbd_device": "/dev/nbd1", 00:06:38.830 "bdev_name": "Malloc1" 00:06:38.830 } 00:06:38.830 ]' 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:38.830 /dev/nbd1' 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:38.830 /dev/nbd1' 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:38.830 256+0 records in 00:06:38.830 256+0 records out 00:06:38.830 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00499361 s, 210 MB/s 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 
00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:38.830 256+0 records in 00:06:38.830 256+0 records out 00:06:38.830 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211216 s, 49.6 MB/s 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:38.830 256+0 records in 00:06:38.830 256+0 records out 00:06:38.830 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228313 s, 45.9 MB/s 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.830 00:49:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:39.088 00:49:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:39.088 00:49:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:39.088 00:49:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:39.088 00:49:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.088 00:49:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.088 00:49:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:39.088 00:49:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:39.088 00:49:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.088 00:49:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.088 00:49:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:39.346 00:49:09 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:39.346 00:49:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:39.346 00:49:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:39.346 00:49:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.346 00:49:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.346 00:49:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:39.346 00:49:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:39.346 00:49:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.346 00:49:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.346 00:49:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.346 00:49:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.604 00:49:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:39.604 00:49:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:39.604 00:49:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.861 00:49:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:39.861 00:49:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:39.861 00:49:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.861 00:49:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:39.861 00:49:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:39.861 00:49:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:39.861 00:49:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:39.861 00:49:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:39.861 00:49:10 event.app_repeat -- 
bdev/nbd_common.sh@109 -- # return 0 00:06:39.861 00:49:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:40.119 00:49:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:40.377 [2024-07-26 00:49:10.568126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.377 [2024-07-26 00:49:10.657925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.377 [2024-07-26 00:49:10.657926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.377 [2024-07-26 00:49:10.719143] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:40.377 [2024-07-26 00:49:10.719211] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:42.914 00:49:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:42.914 00:49:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:42.914 spdk_app_start Round 1 00:06:42.914 00:49:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1702988 /var/tmp/spdk-nbd.sock 00:06:42.914 00:49:13 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1702988 ']' 00:06:42.914 00:49:13 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:42.914 00:49:13 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.914 00:49:13 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:42.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:42.914 00:49:13 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.914 00:49:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:43.480 00:49:13 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.480 00:49:13 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:43.480 00:49:13 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:43.480 Malloc0 00:06:43.480 00:49:13 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:43.737 Malloc1 00:06:43.737 00:49:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:43.737 00:49:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.737 00:49:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.737 00:49:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:43.737 00:49:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.737 00:49:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:43.737 00:49:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:43.737 00:49:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.737 00:49:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.737 00:49:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:43.737 00:49:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.737 00:49:14 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:43.737 00:49:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:43.737 00:49:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:43.737 00:49:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.737 00:49:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:43.995 /dev/nbd0 00:06:43.995 00:49:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:43.995 00:49:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:43.995 00:49:14 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:43.996 00:49:14 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:43.996 00:49:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:43.996 00:49:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:43.996 00:49:14 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:43.996 00:49:14 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:43.996 00:49:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:43.996 00:49:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:43.996 00:49:14 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:43.996 1+0 records in 00:06:43.996 1+0 records out 00:06:43.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179859 s, 22.8 MB/s 00:06:43.996 00:49:14 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:43.996 00:49:14 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:43.996 00:49:14 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:43.996 00:49:14 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:43.996 00:49:14 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:43.996 00:49:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.996 00:49:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.996 00:49:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:44.254 /dev/nbd1 00:06:44.254 00:49:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:44.254 00:49:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:44.254 00:49:14 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:44.254 00:49:14 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:44.254 00:49:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:44.254 00:49:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:44.254 00:49:14 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:44.254 00:49:14 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:44.254 00:49:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:44.254 00:49:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:44.254 00:49:14 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:44.254 1+0 records in 00:06:44.254 1+0 records out 00:06:44.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187946 s, 21.8 MB/s 00:06:44.513 00:49:14 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:44.513 00:49:14 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:44.513 00:49:14 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:44.513 00:49:14 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:44.513 00:49:14 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:44.513 00:49:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.513 00:49:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.513 00:49:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.513 00:49:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.513 00:49:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:44.513 00:49:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:44.513 { 00:06:44.513 "nbd_device": "/dev/nbd0", 00:06:44.513 "bdev_name": "Malloc0" 00:06:44.513 }, 00:06:44.513 { 00:06:44.513 "nbd_device": "/dev/nbd1", 00:06:44.513 "bdev_name": "Malloc1" 00:06:44.513 } 00:06:44.513 ]' 00:06:44.513 00:49:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:44.513 { 00:06:44.513 "nbd_device": "/dev/nbd0", 00:06:44.513 "bdev_name": "Malloc0" 00:06:44.513 }, 00:06:44.513 { 00:06:44.513 "nbd_device": "/dev/nbd1", 00:06:44.513 "bdev_name": "Malloc1" 00:06:44.513 } 00:06:44.513 ]' 00:06:44.513 00:49:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:44.772 00:49:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:44.772 /dev/nbd1' 00:06:44.772 00:49:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:44.772 /dev/nbd1' 00:06:44.772 
00:49:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.772 00:49:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:44.772 00:49:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:44.772 00:49:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:44.772 00:49:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:44.772 00:49:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:44.772 00:49:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.772 00:49:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:44.772 00:49:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:44.772 00:49:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:44.772 00:49:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:44.772 00:49:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:44.772 256+0 records in 00:06:44.772 256+0 records out 00:06:44.772 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00531696 s, 197 MB/s 00:06:44.772 00:49:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.772 00:49:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:44.772 256+0 records in 00:06:44.772 256+0 records out 00:06:44.772 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276663 s, 37.9 MB/s 00:06:44.772 00:49:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.772 00:49:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:44.772 256+0 records in 00:06:44.772 256+0 records out 00:06:44.772 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230678 s, 45.5 MB/s 00:06:44.772 00:49:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:44.772 00:49:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.772 00:49:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:44.772 00:49:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:44.772 00:49:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:44.772 00:49:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:44.772 00:49:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:44.772 00:49:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.772 00:49:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:44.772 00:49:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.773 00:49:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:44.773 00:49:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:44.773 00:49:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:44.773 00:49:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.773 00:49:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:44.773 00:49:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:44.773 00:49:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:44.773 00:49:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.773 00:49:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:45.031 00:49:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:45.031 00:49:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:45.031 00:49:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:45.031 00:49:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.031 00:49:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.031 00:49:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:45.031 00:49:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:45.031 00:49:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.031 00:49:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.031 00:49:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:45.289 00:49:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:45.289 00:49:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:45.289 00:49:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:45.289 00:49:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.289 00:49:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.289 00:49:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:45.289 00:49:15 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:45.289 00:49:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.289 00:49:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.289 00:49:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.289 00:49:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.547 00:49:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:45.547 00:49:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:45.547 00:49:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.547 00:49:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:45.547 00:49:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:45.547 00:49:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.547 00:49:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:45.547 00:49:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:45.547 00:49:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:45.547 00:49:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:45.548 00:49:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:45.548 00:49:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:45.548 00:49:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:45.807 00:49:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:46.067 [2024-07-26 00:49:16.391550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:46.067 [2024-07-26 00:49:16.480040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.067 [2024-07-26 00:49:16.480044] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.348 [2024-07-26 00:49:16.542653] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:46.348 [2024-07-26 00:49:16.542731] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:48.904 00:49:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:48.904 00:49:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:48.904 spdk_app_start Round 2 00:06:48.904 00:49:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1702988 /var/tmp/spdk-nbd.sock 00:06:48.904 00:49:19 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1702988 ']' 00:06:48.904 00:49:19 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:48.904 00:49:19 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.904 00:49:19 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:48.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:48.904 00:49:19 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.904 00:49:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:49.161 00:49:19 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.161 00:49:19 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:49.161 00:49:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:49.418 Malloc0 00:06:49.418 00:49:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:49.676 Malloc1 00:06:49.676 00:49:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.676 00:49:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.676 00:49:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.676 00:49:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:49.676 00:49:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.676 00:49:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:49.676 00:49:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.676 00:49:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.676 00:49:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.676 00:49:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:49.676 00:49:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.676 00:49:19 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:49.676 00:49:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:49.676 00:49:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:49.676 00:49:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.676 00:49:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:49.934 /dev/nbd0 00:06:49.934 00:49:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:49.934 00:49:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:49.934 00:49:20 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:49.934 00:49:20 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:49.934 00:49:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:49.934 00:49:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:49.934 00:49:20 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:49.934 00:49:20 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:49.934 00:49:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:49.934 00:49:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:49.934 00:49:20 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:49.934 1+0 records in 00:06:49.934 1+0 records out 00:06:49.934 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197013 s, 20.8 MB/s 00:06:49.934 00:49:20 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:49.934 00:49:20 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:49.934 00:49:20 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:49.934 00:49:20 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:49.934 00:49:20 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:49.934 00:49:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.934 00:49:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.934 00:49:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:50.192 /dev/nbd1 00:06:50.192 00:49:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:50.192 00:49:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:50.192 00:49:20 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:50.192 00:49:20 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:50.192 00:49:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:50.192 00:49:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:50.192 00:49:20 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:50.192 00:49:20 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:50.192 00:49:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:50.192 00:49:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:50.192 00:49:20 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:50.192 1+0 records in 00:06:50.192 1+0 records out 00:06:50.192 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213978 s, 19.1 MB/s 00:06:50.192 00:49:20 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:50.192 00:49:20 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:50.192 00:49:20 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:50.192 00:49:20 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:50.192 00:49:20 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:50.192 00:49:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.192 00:49:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.192 00:49:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.192 00:49:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.192 00:49:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.450 00:49:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:50.450 { 00:06:50.450 "nbd_device": "/dev/nbd0", 00:06:50.450 "bdev_name": "Malloc0" 00:06:50.450 }, 00:06:50.450 { 00:06:50.450 "nbd_device": "/dev/nbd1", 00:06:50.450 "bdev_name": "Malloc1" 00:06:50.450 } 00:06:50.450 ]' 00:06:50.450 00:49:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:50.450 { 00:06:50.450 "nbd_device": "/dev/nbd0", 00:06:50.450 "bdev_name": "Malloc0" 00:06:50.450 }, 00:06:50.450 { 00:06:50.450 "nbd_device": "/dev/nbd1", 00:06:50.450 "bdev_name": "Malloc1" 00:06:50.450 } 00:06:50.450 ]' 00:06:50.450 00:49:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.450 00:49:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:50.450 /dev/nbd1' 00:06:50.450 00:49:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:50.450 /dev/nbd1' 00:06:50.450 
00:49:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.450 00:49:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:50.450 00:49:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:50.450 00:49:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:50.450 00:49:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:50.450 00:49:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:50.450 00:49:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.450 00:49:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.450 00:49:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:50.450 00:49:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:50.450 00:49:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:50.450 00:49:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:50.450 256+0 records in 00:06:50.450 256+0 records out 00:06:50.450 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00496905 s, 211 MB/s 00:06:50.450 00:49:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.450 00:49:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:50.450 256+0 records in 00:06:50.450 256+0 records out 00:06:50.450 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242787 s, 43.2 MB/s 00:06:50.450 00:49:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.450 00:49:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:50.450 256+0 records in 00:06:50.450 256+0 records out 00:06:50.450 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0290597 s, 36.1 MB/s 00:06:50.708 00:49:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:50.708 00:49:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.708 00:49:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.708 00:49:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:50.708 00:49:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:50.708 00:49:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:50.708 00:49:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:50.708 00:49:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.708 00:49:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:50.708 00:49:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.708 00:49:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:50.708 00:49:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:50.708 00:49:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:50.708 00:49:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.708 00:49:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:50.708 00:49:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:50.708 00:49:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:50.708 00:49:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.708 00:49:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:50.968 00:49:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:50.968 00:49:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:50.968 00:49:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:50.968 00:49:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.968 00:49:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.968 00:49:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:50.968 00:49:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:50.968 00:49:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.968 00:49:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.968 00:49:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:51.227 00:49:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:51.227 00:49:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:51.227 00:49:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:51.227 00:49:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.227 00:49:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.227 00:49:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:51.227 00:49:21 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:51.227 00:49:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.227 00:49:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:51.227 00:49:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.227 00:49:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.484 00:49:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:51.484 00:49:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:51.484 00:49:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.484 00:49:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:51.484 00:49:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:51.484 00:49:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.484 00:49:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:51.484 00:49:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:51.484 00:49:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:51.485 00:49:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:51.485 00:49:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:51.485 00:49:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:51.485 00:49:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:51.743 00:49:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:52.002 [2024-07-26 00:49:22.217886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:52.002 [2024-07-26 00:49:22.306957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.002 [2024-07-26 00:49:22.306961] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.002 [2024-07-26 00:49:22.365255] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:52.002 [2024-07-26 00:49:22.365320] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:55.294 00:49:24 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1702988 /var/tmp/spdk-nbd.sock 00:06:55.294 00:49:24 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1702988 ']' 00:06:55.294 00:49:24 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:55.294 00:49:24 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.294 00:49:24 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:55.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:55.294 00:49:24 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.294 00:49:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:55.294 00:49:25 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.294 00:49:25 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:55.294 00:49:25 event.app_repeat -- event/event.sh@39 -- # killprocess 1702988 00:06:55.294 00:49:25 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1702988 ']' 00:06:55.294 00:49:25 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1702988 00:06:55.294 00:49:25 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:55.294 00:49:25 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:55.294 00:49:25 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1702988 00:06:55.294 00:49:25 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:55.294 00:49:25 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:55.294 00:49:25 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1702988' 00:06:55.294 killing process with pid 1702988 00:06:55.294 00:49:25 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1702988 00:06:55.294 00:49:25 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1702988 00:06:55.294 spdk_app_start is called in Round 0. 00:06:55.294 Shutdown signal received, stop current app iteration 00:06:55.294 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 reinitialization... 00:06:55.294 spdk_app_start is called in Round 1. 00:06:55.294 Shutdown signal received, stop current app iteration 00:06:55.294 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 reinitialization... 00:06:55.294 spdk_app_start is called in Round 2. 
00:06:55.294 Shutdown signal received, stop current app iteration 00:06:55.294 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 reinitialization... 00:06:55.294 spdk_app_start is called in Round 3. 00:06:55.294 Shutdown signal received, stop current app iteration 00:06:55.294 00:49:25 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:55.294 00:49:25 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:55.294 00:06:55.294 real 0m17.978s 00:06:55.294 user 0m39.250s 00:06:55.294 sys 0m3.163s 00:06:55.294 00:49:25 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.294 00:49:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:55.294 ************************************ 00:06:55.294 END TEST app_repeat 00:06:55.294 ************************************ 00:06:55.294 00:49:25 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:55.294 00:49:25 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:55.294 00:49:25 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.294 00:49:25 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.294 00:49:25 event -- common/autotest_common.sh@10 -- # set +x 00:06:55.294 ************************************ 00:06:55.294 START TEST cpu_locks 00:06:55.294 ************************************ 00:06:55.294 00:49:25 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:55.294 * Looking for test storage... 
00:06:55.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:55.294 00:49:25 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:55.294 00:49:25 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:55.294 00:49:25 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:55.294 00:49:25 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:55.294 00:49:25 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.294 00:49:25 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.294 00:49:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.294 ************************************ 00:06:55.294 START TEST default_locks 00:06:55.294 ************************************ 00:06:55.294 00:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:55.294 00:49:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1705342 00:06:55.294 00:49:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:55.294 00:49:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1705342 00:06:55.294 00:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1705342 ']' 00:06:55.294 00:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.294 00:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.294 00:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:55.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.294 00:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.294 00:49:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.294 [2024-07-26 00:49:25.632080] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:06:55.294 [2024-07-26 00:49:25.632158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1705342 ] 00:06:55.294 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.294 [2024-07-26 00:49:25.688694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.553 [2024-07-26 00:49:25.778376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.811 00:49:26 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.811 00:49:26 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:55.811 00:49:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1705342 00:06:55.811 00:49:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1705342 00:06:55.811 00:49:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:56.069 lslocks: write error 00:06:56.069 00:49:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1705342 00:06:56.069 00:49:26 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1705342 ']' 00:06:56.069 00:49:26 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1705342 00:06:56.069 00:49:26 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:56.069 00:49:26 event.cpu_locks.default_locks 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:56.069 00:49:26 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1705342 00:06:56.069 00:49:26 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:56.069 00:49:26 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:56.069 00:49:26 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1705342' 00:06:56.069 killing process with pid 1705342 00:06:56.069 00:49:26 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1705342 00:06:56.069 00:49:26 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1705342 00:06:56.635 00:49:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1705342 00:06:56.635 00:49:26 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:56.635 00:49:26 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1705342 00:06:56.635 00:49:26 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:56.635 00:49:26 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.635 00:49:26 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:56.635 00:49:26 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.635 00:49:26 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 1705342 00:06:56.635 00:49:26 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1705342 ']' 00:06:56.635 00:49:26 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.635 00:49:26 event.cpu_locks.default_locks -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.635 00:49:26 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.635 00:49:26 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.635 00:49:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.636 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1705342) - No such process 00:06:56.636 ERROR: process (pid: 1705342) is no longer running 00:06:56.636 00:49:26 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.636 00:49:26 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:56.636 00:49:26 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:56.636 00:49:26 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:56.636 00:49:26 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:56.636 00:49:26 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:56.636 00:49:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:56.636 00:49:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:56.636 00:49:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:56.636 00:49:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:56.636 00:06:56.636 real 0m1.196s 00:06:56.636 user 0m1.108s 00:06:56.636 sys 0m0.536s 00:06:56.636 00:49:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.636 00:49:26 event.cpu_locks.default_locks -- 
common/autotest_common.sh@10 -- # set +x 00:06:56.636 ************************************ 00:06:56.636 END TEST default_locks 00:06:56.636 ************************************ 00:06:56.636 00:49:26 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:56.636 00:49:26 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:56.636 00:49:26 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.636 00:49:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.636 ************************************ 00:06:56.636 START TEST default_locks_via_rpc 00:06:56.636 ************************************ 00:06:56.636 00:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:56.636 00:49:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1705544 00:06:56.636 00:49:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:56.636 00:49:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1705544 00:06:56.636 00:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1705544 ']' 00:06:56.636 00:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.636 00:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.636 00:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:56.636 00:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.636 00:49:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.636 [2024-07-26 00:49:26.874050] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:06:56.636 [2024-07-26 00:49:26.874159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1705544 ] 00:06:56.636 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.636 [2024-07-26 00:49:26.936814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.636 [2024-07-26 00:49:27.026829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.894 00:49:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.894 00:49:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:56.894 00:49:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:56.894 00:49:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.894 00:49:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.894 00:49:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.894 00:49:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:56.894 00:49:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:56.894 00:49:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:56.894 00:49:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 
00:06:56.894 00:49:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:56.894 00:49:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.894 00:49:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.894 00:49:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.894 00:49:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1705544 00:06:56.895 00:49:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1705544 00:06:56.895 00:49:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:57.460 00:49:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1705544 00:06:57.460 00:49:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1705544 ']' 00:06:57.460 00:49:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1705544 00:06:57.460 00:49:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:57.460 00:49:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.460 00:49:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1705544 00:06:57.460 00:49:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.460 00:49:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.460 00:49:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1705544' 00:06:57.460 killing process with pid 1705544 00:06:57.460 00:49:27 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@969 -- # kill 1705544 00:06:57.460 00:49:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1705544 00:06:57.719 00:06:57.719 real 0m1.219s 00:06:57.719 user 0m1.150s 00:06:57.719 sys 0m0.524s 00:06:57.719 00:49:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.719 00:49:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.719 ************************************ 00:06:57.719 END TEST default_locks_via_rpc 00:06:57.719 ************************************ 00:06:57.719 00:49:28 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:57.719 00:49:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.719 00:49:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.719 00:49:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.719 ************************************ 00:06:57.719 START TEST non_locking_app_on_locked_coremask 00:06:57.719 ************************************ 00:06:57.719 00:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:57.719 00:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1705787 00:06:57.719 00:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:57.719 00:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1705787 /var/tmp/spdk.sock 00:06:57.719 00:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1705787 ']' 00:06:57.719 00:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.719 00:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.719 00:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.719 00:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.719 00:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.719 [2024-07-26 00:49:28.140769] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:06:57.719 [2024-07-26 00:49:28.140857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1705787 ] 00:06:57.979 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.979 [2024-07-26 00:49:28.198291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.979 [2024-07-26 00:49:28.287562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.236 00:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.236 00:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:58.236 00:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1705795 00:06:58.236 00:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
--disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:58.236 00:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1705795 /var/tmp/spdk2.sock 00:06:58.236 00:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1705795 ']' 00:06:58.236 00:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.236 00:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.236 00:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:58.236 00:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.236 00:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.236 [2024-07-26 00:49:28.585361] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:06:58.236 [2024-07-26 00:49:28.585438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1705795 ] 00:06:58.236 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.496 [2024-07-26 00:49:28.676836] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:58.496 [2024-07-26 00:49:28.676870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.496 [2024-07-26 00:49:28.860678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.430 00:49:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.430 00:49:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:59.430 00:49:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1705787 00:06:59.430 00:49:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1705787 00:06:59.430 00:49:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:59.690 lslocks: write error 00:06:59.690 00:49:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1705787 00:06:59.690 00:49:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1705787 ']' 00:06:59.690 00:49:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1705787 00:06:59.690 00:49:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:59.690 00:49:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:59.690 00:49:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1705787 00:06:59.690 00:49:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:59.690 00:49:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:59.690 00:49:30 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1705787' 00:06:59.690 killing process with pid 1705787 00:06:59.690 00:49:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1705787 00:06:59.690 00:49:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1705787 00:07:00.627 00:49:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1705795 00:07:00.627 00:49:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1705795 ']' 00:07:00.627 00:49:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1705795 00:07:00.627 00:49:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:00.627 00:49:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:00.627 00:49:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1705795 00:07:00.627 00:49:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:00.627 00:49:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:00.627 00:49:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1705795' 00:07:00.627 killing process with pid 1705795 00:07:00.627 00:49:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1705795 00:07:00.627 00:49:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1705795 00:07:00.884 00:07:00.884 real 0m3.196s 00:07:00.884 user 0m3.334s 00:07:00.884 sys 0m1.073s 00:07:00.884 00:49:31 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.884 00:49:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.884 ************************************ 00:07:00.884 END TEST non_locking_app_on_locked_coremask 00:07:00.884 ************************************ 00:07:00.884 00:49:31 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:00.884 00:49:31 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.884 00:49:31 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.884 00:49:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.142 ************************************ 00:07:01.142 START TEST locking_app_on_unlocked_coremask 00:07:01.142 ************************************ 00:07:01.142 00:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:01.142 00:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1706109 00:07:01.142 00:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:01.142 00:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1706109 /var/tmp/spdk.sock 00:07:01.142 00:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1706109 ']' 00:07:01.142 00:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.142 00:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.142 00:49:31 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.142 00:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.142 00:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.142 [2024-07-26 00:49:31.385496] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:07:01.142 [2024-07-26 00:49:31.385614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1706109 ] 00:07:01.142 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.142 [2024-07-26 00:49:31.449323] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:01.142 [2024-07-26 00:49:31.449387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.142 [2024-07-26 00:49:31.539932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.400 00:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.400 00:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:01.400 00:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1706234 00:07:01.400 00:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:01.400 00:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1706234 /var/tmp/spdk2.sock 00:07:01.400 00:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1706234 ']' 00:07:01.400 00:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:01.400 00:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.400 00:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:01.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:01.400 00:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.400 00:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.660 [2024-07-26 00:49:31.841557] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:07:01.660 [2024-07-26 00:49:31.841650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1706234 ] 00:07:01.660 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.660 [2024-07-26 00:49:31.938667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.920 [2024-07-26 00:49:32.125324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.486 00:49:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.486 00:49:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:02.486 00:49:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1706234 00:07:02.486 00:49:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1706234 00:07:02.486 00:49:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:02.745 lslocks: write error 00:07:02.745 00:49:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1706109 00:07:02.745 00:49:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1706109 ']' 00:07:02.745 00:49:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1706109 00:07:02.745 00:49:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:02.745 00:49:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:02.745 00:49:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1706109 00:07:03.004 00:49:33 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:03.004 00:49:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:03.004 00:49:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1706109' 00:07:03.004 killing process with pid 1706109 00:07:03.004 00:49:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1706109 00:07:03.004 00:49:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1706109 00:07:03.936 00:49:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1706234 00:07:03.936 00:49:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1706234 ']' 00:07:03.936 00:49:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1706234 00:07:03.936 00:49:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:03.936 00:49:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:03.936 00:49:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1706234 00:07:03.936 00:49:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:03.936 00:49:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:03.936 00:49:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1706234' 00:07:03.936 killing process with pid 1706234 00:07:03.936 00:49:34 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@969 -- # kill 1706234 00:07:03.936 00:49:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1706234 00:07:04.194 00:07:04.194 real 0m3.125s 00:07:04.194 user 0m3.252s 00:07:04.194 sys 0m1.036s 00:07:04.194 00:49:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.194 00:49:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.194 ************************************ 00:07:04.194 END TEST locking_app_on_unlocked_coremask 00:07:04.194 ************************************ 00:07:04.194 00:49:34 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:04.194 00:49:34 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.194 00:49:34 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.194 00:49:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.194 ************************************ 00:07:04.194 START TEST locking_app_on_locked_coremask 00:07:04.194 ************************************ 00:07:04.194 00:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:04.194 00:49:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1706540 00:07:04.194 00:49:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:04.194 00:49:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1706540 /var/tmp/spdk.sock 00:07:04.194 00:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1706540 ']' 00:07:04.194 00:49:34 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.194 00:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.194 00:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.194 00:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.194 00:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.194 [2024-07-26 00:49:34.560892] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:07:04.194 [2024-07-26 00:49:34.560976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1706540 ] 00:07:04.194 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.471 [2024-07-26 00:49:34.625146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.471 [2024-07-26 00:49:34.715523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.744 00:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.744 00:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:04.744 00:49:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1706665 00:07:04.744 00:49:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:04.744 
00:49:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1706665 /var/tmp/spdk2.sock 00:07:04.744 00:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:04.744 00:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1706665 /var/tmp/spdk2.sock 00:07:04.744 00:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:04.744 00:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.744 00:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:04.744 00:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.744 00:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1706665 /var/tmp/spdk2.sock 00:07:04.744 00:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1706665 ']' 00:07:04.744 00:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:04.744 00:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.744 00:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:04.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:04.744 00:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.744 00:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.744 [2024-07-26 00:49:35.021943] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:07:04.744 [2024-07-26 00:49:35.022021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1706665 ] 00:07:04.744 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.744 [2024-07-26 00:49:35.114074] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1706540 has claimed it. 00:07:04.744 [2024-07-26 00:49:35.114133] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:05.312 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1706665) - No such process 00:07:05.312 ERROR: process (pid: 1706665) is no longer running 00:07:05.312 00:49:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.312 00:49:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:05.312 00:49:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:05.312 00:49:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:05.312 00:49:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:05.312 00:49:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:05.312 00:49:35 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@122 -- # locks_exist 1706540 00:07:05.312 00:49:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1706540 00:07:05.312 00:49:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:05.879 lslocks: write error 00:07:05.879 00:49:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1706540 00:07:05.879 00:49:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1706540 ']' 00:07:05.879 00:49:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1706540 00:07:05.879 00:49:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:05.879 00:49:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:05.879 00:49:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1706540 00:07:05.879 00:49:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:05.879 00:49:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:05.879 00:49:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1706540' 00:07:05.879 killing process with pid 1706540 00:07:05.879 00:49:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1706540 00:07:05.879 00:49:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1706540 00:07:06.138 00:07:06.138 real 0m1.958s 00:07:06.138 user 0m2.100s 00:07:06.138 sys 0m0.638s 00:07:06.138 00:49:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.138 
00:49:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.138 ************************************ 00:07:06.138 END TEST locking_app_on_locked_coremask 00:07:06.138 ************************************ 00:07:06.138 00:49:36 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:06.138 00:49:36 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.138 00:49:36 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.138 00:49:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.138 ************************************ 00:07:06.138 START TEST locking_overlapped_coremask 00:07:06.138 ************************************ 00:07:06.138 00:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:06.138 00:49:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1706835 00:07:06.138 00:49:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:06.138 00:49:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1706835 /var/tmp/spdk.sock 00:07:06.138 00:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1706835 ']' 00:07:06.138 00:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.138 00:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.138 00:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:06.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.138 00:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.138 00:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.396 [2024-07-26 00:49:36.572617] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:07:06.396 [2024-07-26 00:49:36.572705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1706835 ] 00:07:06.396 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.396 [2024-07-26 00:49:36.634218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:06.396 [2024-07-26 00:49:36.726372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.396 [2024-07-26 00:49:36.726425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.396 [2024-07-26 00:49:36.726442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.655 00:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.655 00:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:06.655 00:49:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1706851 00:07:06.655 00:49:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:06.655 00:49:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1706851 /var/tmp/spdk2.sock 00:07:06.655 00:49:36 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@650 -- # local es=0 00:07:06.655 00:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1706851 /var/tmp/spdk2.sock 00:07:06.655 00:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:06.655 00:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.655 00:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:06.655 00:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.655 00:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1706851 /var/tmp/spdk2.sock 00:07:06.655 00:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1706851 ']' 00:07:06.655 00:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:06.655 00:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.655 00:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:06.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:06.655 00:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.655 00:49:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.655 [2024-07-26 00:49:37.028764] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:07:06.655 [2024-07-26 00:49:37.028881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1706851 ] 00:07:06.655 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.915 [2024-07-26 00:49:37.122778] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1706835 has claimed it. 00:07:06.915 [2024-07-26 00:49:37.122846] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:07.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1706851) - No such process 00:07:07.485 ERROR: process (pid: 1706851) is no longer running 00:07:07.485 00:49:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.485 00:49:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:07.485 00:49:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:07.485 00:49:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:07.485 00:49:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:07.485 00:49:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:07.485 00:49:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:07.485 00:49:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:07.485 00:49:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:07.485 00:49:37 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:07.486 00:49:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1706835 00:07:07.486 00:49:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1706835 ']' 00:07:07.486 00:49:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1706835 00:07:07.486 00:49:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:07.486 00:49:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:07.486 00:49:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1706835 00:07:07.486 00:49:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:07.486 00:49:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:07.486 00:49:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1706835' 00:07:07.486 killing process with pid 1706835 00:07:07.486 00:49:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 1706835 00:07:07.486 00:49:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1706835 00:07:07.745 00:07:07.745 real 0m1.631s 00:07:07.745 user 0m4.409s 00:07:07.745 sys 0m0.454s 00:07:07.745 00:49:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.745 00:49:38 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.745 ************************************ 00:07:07.745 END TEST locking_overlapped_coremask 00:07:07.745 ************************************ 00:07:08.004 00:49:38 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:08.004 00:49:38 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:08.004 00:49:38 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.004 00:49:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.004 ************************************ 00:07:08.004 START TEST locking_overlapped_coremask_via_rpc 00:07:08.004 ************************************ 00:07:08.004 00:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:08.004 00:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1707107 00:07:08.004 00:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:08.004 00:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1707107 /var/tmp/spdk.sock 00:07:08.004 00:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1707107 ']' 00:07:08.004 00:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.004 00:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.004 00:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.004 00:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.004 00:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.004 [2024-07-26 00:49:38.252897] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:07:08.004 [2024-07-26 00:49:38.252995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1707107 ] 00:07:08.004 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.004 [2024-07-26 00:49:38.315687] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:08.004 [2024-07-26 00:49:38.315728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:08.004 [2024-07-26 00:49:38.406938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.004 [2024-07-26 00:49:38.406992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.004 [2024-07-26 00:49:38.407010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.262 00:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:08.262 00:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:08.262 00:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1707139 00:07:08.262 00:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 
--disable-cpumask-locks 00:07:08.262 00:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1707139 /var/tmp/spdk2.sock 00:07:08.262 00:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1707139 ']' 00:07:08.262 00:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:08.263 00:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.263 00:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:08.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:08.263 00:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.263 00:49:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.521 [2024-07-26 00:49:38.701304] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:07:08.521 [2024-07-26 00:49:38.701404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1707139 ] 00:07:08.521 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.521 [2024-07-26 00:49:38.790570] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:08.521 [2024-07-26 00:49:38.790615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:08.780 [2024-07-26 00:49:38.961399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.780 [2024-07-26 00:49:38.965110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:08.780 [2024-07-26 00:49:38.965112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.347 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.347 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:09.347 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:09.347 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.347 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.347 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.347 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:09.347 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:09.347 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:09.347 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:09.347 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.347 00:49:39 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:09.347 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.347 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:09.347 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.347 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.347 [2024-07-26 00:49:39.639153] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1707107 has claimed it. 00:07:09.347 request: 00:07:09.347 { 00:07:09.347 "method": "framework_enable_cpumask_locks", 00:07:09.347 "req_id": 1 00:07:09.347 } 00:07:09.347 Got JSON-RPC error response 00:07:09.347 response: 00:07:09.347 { 00:07:09.347 "code": -32603, 00:07:09.347 "message": "Failed to claim CPU core: 2" 00:07:09.347 } 00:07:09.347 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:09.347 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:09.347 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:09.347 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:09.347 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:09.347 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1707107 /var/tmp/spdk.sock 00:07:09.347 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 
-- # '[' -z 1707107 ']' 00:07:09.347 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.347 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.347 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.347 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.347 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.605 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.605 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:09.605 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1707139 /var/tmp/spdk2.sock 00:07:09.605 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1707139 ']' 00:07:09.605 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:09.605 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.605 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:09.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:09.605 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.605 00:49:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.865 00:49:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.865 00:49:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:09.865 00:49:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:09.865 00:49:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:09.865 00:49:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:09.865 00:49:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:09.865 00:07:09.865 real 0m1.979s 00:07:09.865 user 0m1.040s 00:07:09.865 sys 0m0.172s 00:07:09.865 00:49:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.865 00:49:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.865 ************************************ 00:07:09.865 END TEST locking_overlapped_coremask_via_rpc 00:07:09.865 ************************************ 00:07:09.865 00:49:40 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:09.865 00:49:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1707107 ]] 00:07:09.865 00:49:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 1707107 00:07:09.865 00:49:40 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1707107 ']' 00:07:09.865 00:49:40 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1707107 00:07:09.865 00:49:40 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:09.865 00:49:40 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:09.865 00:49:40 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1707107 00:07:09.865 00:49:40 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:09.865 00:49:40 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:09.865 00:49:40 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1707107' 00:07:09.865 killing process with pid 1707107 00:07:09.865 00:49:40 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1707107 00:07:09.865 00:49:40 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1707107 00:07:10.432 00:49:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1707139 ]] 00:07:10.432 00:49:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1707139 00:07:10.432 00:49:40 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1707139 ']' 00:07:10.432 00:49:40 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1707139 00:07:10.432 00:49:40 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:10.432 00:49:40 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.432 00:49:40 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1707139 00:07:10.432 00:49:40 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:10.432 00:49:40 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:10.432 00:49:40 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
1707139' 00:07:10.432 killing process with pid 1707139 00:07:10.432 00:49:40 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1707139 00:07:10.432 00:49:40 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1707139 00:07:10.690 00:49:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:10.690 00:49:41 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:10.690 00:49:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1707107 ]] 00:07:10.690 00:49:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1707107 00:07:10.690 00:49:41 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1707107 ']' 00:07:10.690 00:49:41 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1707107 00:07:10.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1707107) - No such process 00:07:10.690 00:49:41 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1707107 is not found' 00:07:10.690 Process with pid 1707107 is not found 00:07:10.690 00:49:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1707139 ]] 00:07:10.690 00:49:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1707139 00:07:10.690 00:49:41 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1707139 ']' 00:07:10.690 00:49:41 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1707139 00:07:10.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1707139) - No such process 00:07:10.690 00:49:41 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1707139 is not found' 00:07:10.690 Process with pid 1707139 is not found 00:07:10.690 00:49:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:10.690 00:07:10.690 real 0m15.572s 00:07:10.690 user 0m27.219s 00:07:10.690 sys 0m5.359s 00:07:10.690 00:49:41 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.690 
00:49:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.690 ************************************ 00:07:10.690 END TEST cpu_locks 00:07:10.690 ************************************ 00:07:10.690 00:07:10.690 real 0m39.965s 00:07:10.690 user 1m15.825s 00:07:10.690 sys 0m9.329s 00:07:10.690 00:49:41 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.690 00:49:41 event -- common/autotest_common.sh@10 -- # set +x 00:07:10.690 ************************************ 00:07:10.690 END TEST event 00:07:10.690 ************************************ 00:07:10.948 00:49:41 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:10.949 00:49:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.949 00:49:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.949 00:49:41 -- common/autotest_common.sh@10 -- # set +x 00:07:10.949 ************************************ 00:07:10.949 START TEST thread 00:07:10.949 ************************************ 00:07:10.949 00:49:41 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:10.949 * Looking for test storage... 
00:07:10.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:10.949 00:49:41 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:10.949 00:49:41 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:10.949 00:49:41 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.949 00:49:41 thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.949 ************************************ 00:07:10.949 START TEST thread_poller_perf 00:07:10.949 ************************************ 00:07:10.949 00:49:41 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:10.949 [2024-07-26 00:49:41.225718] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:07:10.949 [2024-07-26 00:49:41.225785] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1707513 ] 00:07:10.949 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.949 [2024-07-26 00:49:41.286872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.207 [2024-07-26 00:49:41.376353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.207 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:12.145 ====================================== 00:07:12.145 busy:2717123779 (cyc) 00:07:12.145 total_run_count: 292000 00:07:12.145 tsc_hz: 2700000000 (cyc) 00:07:12.145 ====================================== 00:07:12.145 poller_cost: 9305 (cyc), 3446 (nsec) 00:07:12.145 00:07:12.145 real 0m1.256s 00:07:12.145 user 0m1.169s 00:07:12.145 sys 0m0.081s 00:07:12.145 00:49:42 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.145 00:49:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:12.145 ************************************ 00:07:12.145 END TEST thread_poller_perf 00:07:12.145 ************************************ 00:07:12.145 00:49:42 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:12.145 00:49:42 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:12.145 00:49:42 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.145 00:49:42 thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.145 ************************************ 00:07:12.145 START TEST thread_poller_perf 00:07:12.145 ************************************ 00:07:12.145 00:49:42 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:12.145 [2024-07-26 00:49:42.528652] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:07:12.145 [2024-07-26 00:49:42.528706] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1707666 ] 00:07:12.145 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.405 [2024-07-26 00:49:42.590708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.405 [2024-07-26 00:49:42.686053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.405 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:13.343 ====================================== 00:07:13.343 busy:2702413062 (cyc) 00:07:13.343 total_run_count: 3859000 00:07:13.343 tsc_hz: 2700000000 (cyc) 00:07:13.343 ====================================== 00:07:13.343 poller_cost: 700 (cyc), 259 (nsec) 00:07:13.601 00:07:13.601 real 0m1.252s 00:07:13.601 user 0m1.163s 00:07:13.601 sys 0m0.083s 00:07:13.601 00:49:43 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.601 00:49:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:13.601 ************************************ 00:07:13.601 END TEST thread_poller_perf 00:07:13.601 ************************************ 00:07:13.601 00:49:43 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:13.601 00:07:13.601 real 0m2.649s 00:07:13.601 user 0m2.386s 00:07:13.601 sys 0m0.263s 00:07:13.601 00:49:43 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.601 00:49:43 thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.601 ************************************ 00:07:13.601 END TEST thread 00:07:13.601 ************************************ 00:07:13.601 00:49:43 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:07:13.601 00:49:43 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 
00:07:13.601 00:49:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:13.601 00:49:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.601 00:49:43 -- common/autotest_common.sh@10 -- # set +x 00:07:13.601 ************************************ 00:07:13.601 START TEST app_cmdline 00:07:13.601 ************************************ 00:07:13.601 00:49:43 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:13.601 * Looking for test storage... 00:07:13.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:13.601 00:49:43 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:13.601 00:49:43 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1707951 00:07:13.601 00:49:43 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:13.601 00:49:43 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1707951 00:07:13.601 00:49:43 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1707951 ']' 00:07:13.601 00:49:43 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.601 00:49:43 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.601 00:49:43 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.601 00:49:43 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.601 00:49:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:13.601 [2024-07-26 00:49:43.942788] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:07:13.601 [2024-07-26 00:49:43.942871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1707951 ] 00:07:13.601 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.601 [2024-07-26 00:49:43.998309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.860 [2024-07-26 00:49:44.083294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.120 00:49:44 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.120 00:49:44 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:14.120 00:49:44 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:14.381 { 00:07:14.381 "version": "SPDK v24.09-pre git sha1 704257090", 00:07:14.381 "fields": { 00:07:14.381 "major": 24, 00:07:14.381 "minor": 9, 00:07:14.381 "patch": 0, 00:07:14.381 "suffix": "-pre", 00:07:14.381 "commit": "704257090" 00:07:14.381 } 00:07:14.381 } 00:07:14.381 00:49:44 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:14.381 00:49:44 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:14.381 00:49:44 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:14.381 00:49:44 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:14.381 00:49:44 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:14.381 00:49:44 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.381 00:49:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:14.381 00:49:44 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:14.381 00:49:44 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:14.381 00:49:44 app_cmdline -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.381 00:49:44 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:14.381 00:49:44 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:14.381 00:49:44 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:14.381 00:49:44 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:14.381 00:49:44 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:14.381 00:49:44 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:14.381 00:49:44 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.381 00:49:44 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:14.381 00:49:44 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.381 00:49:44 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:14.381 00:49:44 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.381 00:49:44 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:14.381 00:49:44 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:14.381 00:49:44 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:14.640 request: 00:07:14.640 { 00:07:14.640 "method": "env_dpdk_get_mem_stats", 00:07:14.640 "req_id": 1 
00:07:14.640 } 00:07:14.640 Got JSON-RPC error response 00:07:14.640 response: 00:07:14.640 { 00:07:14.640 "code": -32601, 00:07:14.640 "message": "Method not found" 00:07:14.640 } 00:07:14.640 00:49:44 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:14.640 00:49:44 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:14.640 00:49:44 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:14.640 00:49:44 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:14.640 00:49:44 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1707951 00:07:14.640 00:49:44 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1707951 ']' 00:07:14.640 00:49:44 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1707951 00:07:14.640 00:49:44 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:14.640 00:49:44 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:14.640 00:49:44 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1707951 00:07:14.640 00:49:44 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:14.640 00:49:44 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:14.640 00:49:44 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1707951' 00:07:14.640 killing process with pid 1707951 00:07:14.640 00:49:44 app_cmdline -- common/autotest_common.sh@969 -- # kill 1707951 00:07:14.640 00:49:44 app_cmdline -- common/autotest_common.sh@974 -- # wait 1707951 00:07:14.898 00:07:14.898 real 0m1.453s 00:07:14.898 user 0m1.774s 00:07:14.898 sys 0m0.453s 00:07:14.898 00:49:45 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.898 00:49:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:14.898 ************************************ 00:07:14.898 END TEST app_cmdline 00:07:14.898 ************************************ 00:07:14.898 00:49:45 -- 
spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:14.898 00:49:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:14.898 00:49:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.898 00:49:45 -- common/autotest_common.sh@10 -- # set +x 00:07:15.156 ************************************ 00:07:15.156 START TEST version 00:07:15.156 ************************************ 00:07:15.156 00:49:45 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:15.156 * Looking for test storage... 00:07:15.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:15.156 00:49:45 version -- app/version.sh@17 -- # get_header_version major 00:07:15.156 00:49:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:15.156 00:49:45 version -- app/version.sh@14 -- # cut -f2 00:07:15.156 00:49:45 version -- app/version.sh@14 -- # tr -d '"' 00:07:15.156 00:49:45 version -- app/version.sh@17 -- # major=24 00:07:15.156 00:49:45 version -- app/version.sh@18 -- # get_header_version minor 00:07:15.156 00:49:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:15.156 00:49:45 version -- app/version.sh@14 -- # cut -f2 00:07:15.156 00:49:45 version -- app/version.sh@14 -- # tr -d '"' 00:07:15.156 00:49:45 version -- app/version.sh@18 -- # minor=9 00:07:15.156 00:49:45 version -- app/version.sh@19 -- # get_header_version patch 00:07:15.156 00:49:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:15.156 00:49:45 version -- app/version.sh@14 -- # cut -f2 00:07:15.156 00:49:45 
version -- app/version.sh@14 -- # tr -d '"' 00:07:15.156 00:49:45 version -- app/version.sh@19 -- # patch=0 00:07:15.156 00:49:45 version -- app/version.sh@20 -- # get_header_version suffix 00:07:15.156 00:49:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:15.156 00:49:45 version -- app/version.sh@14 -- # cut -f2 00:07:15.156 00:49:45 version -- app/version.sh@14 -- # tr -d '"' 00:07:15.156 00:49:45 version -- app/version.sh@20 -- # suffix=-pre 00:07:15.156 00:49:45 version -- app/version.sh@22 -- # version=24.9 00:07:15.156 00:49:45 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:15.156 00:49:45 version -- app/version.sh@28 -- # version=24.9rc0 00:07:15.156 00:49:45 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:15.156 00:49:45 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:15.156 00:49:45 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:15.156 00:49:45 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:15.156 00:07:15.156 real 0m0.108s 00:07:15.156 user 0m0.055s 00:07:15.156 sys 0m0.074s 00:07:15.156 00:49:45 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.156 00:49:45 version -- common/autotest_common.sh@10 -- # set +x 00:07:15.156 ************************************ 00:07:15.156 END TEST version 00:07:15.156 ************************************ 00:07:15.156 00:49:45 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:07:15.156 00:49:45 -- spdk/autotest.sh@202 -- # uname -s 00:07:15.156 00:49:45 -- spdk/autotest.sh@202 -- # [[ Linux == 
Linux ]] 00:07:15.156 00:49:45 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:15.156 00:49:45 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:15.156 00:49:45 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:07:15.156 00:49:45 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:15.156 00:49:45 -- spdk/autotest.sh@264 -- # timing_exit lib 00:07:15.156 00:49:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:15.156 00:49:45 -- common/autotest_common.sh@10 -- # set +x 00:07:15.156 00:49:45 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:07:15.156 00:49:45 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:07:15.156 00:49:45 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:07:15.156 00:49:45 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:07:15.156 00:49:45 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:07:15.156 00:49:45 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:07:15.156 00:49:45 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:15.156 00:49:45 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:15.157 00:49:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.157 00:49:45 -- common/autotest_common.sh@10 -- # set +x 00:07:15.157 ************************************ 00:07:15.157 START TEST nvmf_tcp 00:07:15.157 ************************************ 00:07:15.157 00:49:45 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:15.157 * Looking for test storage... 00:07:15.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:15.157 00:49:45 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:15.157 00:49:45 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:15.157 00:49:45 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:15.157 00:49:45 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:15.157 00:49:45 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.157 00:49:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:15.415 ************************************ 00:07:15.415 START TEST nvmf_target_core 00:07:15.415 ************************************ 00:07:15.415 00:49:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:15.415 * Looking for test storage... 00:07:15.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:15.415 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:15.415 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:15.415 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:15.415 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:15.416 00:49:45 
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:15.416 ************************************ 00:07:15.416 START TEST nvmf_abort 00:07:15.416 ************************************ 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:15.416 * Looking for test storage... 
00:07:15.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:15.416 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:15.417 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:15.417 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:15.417 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:15.417 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:15.417 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:15.417 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:15.417 00:49:45 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:15.417 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:15.417 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:15.417 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:15.417 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.417 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:15.417 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.417 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:15.417 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:15.417 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:15.417 00:49:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.319 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:17.319 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:17.319 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:17.319 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:17.319 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:17.320 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:17.320 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:17.320 00:49:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:17.320 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:17.320 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:17.320 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:17.320 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:17.320 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:17.320 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:17.320 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:17.320 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:17.320 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:17.320 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:17.320 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:17.320 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:17.320 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:17.320 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:17.320 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:17.320 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:17.320 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:17.320 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:17.320 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:17.320 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:17.320 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:17.320 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:17.320 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:17.320 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:17.320 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:17.320 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:17.320 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:17.579 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:17.579 00:49:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:17.579 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.579 00:49:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:17.579 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:17.579 00:49:47 
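The `gather_supported_nvmf_pci_devs` loop traced above resolves each supported PCI NIC to its kernel net interface through sysfs. A minimal standalone sketch of that lookup (the vendor/device pair 0x8086:0x159b, Intel E810 / `ice`, is taken from this log; the function name and `base` parameter are illustrative, not SPDK helpers):

```shell
#!/usr/bin/env bash
# Resolve PCI NICs (matched by vendor:device ID) to their net interface
# names via sysfs, mirroring the scan traced in the log above.
scan_pci_net_devs() {
    local base=${1:-/sys/bus/pci/devices} vendor=$2 device=$3 pci net
    for pci in "$base"/*; do
        [[ -r $pci/vendor && -r $pci/device ]] || continue
        [[ $(<"$pci/vendor") == "$vendor" && $(<"$pci/device") == "$device" ]] || continue
        # A bound NIC exposes its interface name(s) under <pci>/net/.
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found ${pci##*/}: ${net##*/}"
        done
    done
}
```

On the host in this log, `scan_pci_net_devs /sys/bus/pci/devices 0x8086 0x159b` would report the two E810 ports and their `cvl_0_0`/`cvl_0_1` interfaces.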
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:17.579 00:49:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:17.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:17.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:07:17.579 00:07:17.579 --- 10.0.0.2 ping statistics --- 00:07:17.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.579 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:17.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:17.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:07:17.579 00:07:17.579 --- 10.0.0.1 ping statistics --- 00:07:17.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.579 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:17.579 00:49:47 
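The `nvmf_tcp_init` sequence traced above follows a standard pattern for single-host NVMe/TCP testing: move one port of the two-port NIC into a network namespace, address both ends on the same /24, open the NVMe/TCP port in the firewall, and verify reachability with ping in both directions. A condensed sketch of those commands (interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.0/24 addresses are specific to this run; requires root):

```shell
# Sketch of the netns plumbing traced above; names/addresses are from this
# specific log and will differ on other hosts. Must run as root.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0    # moved into the namespace; becomes the target side (10.0.0.2)
INI_IF=cvl_0_1    # stays in the root namespace; initiator side (10.0.0.1)

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port

ping -c 1 10.0.0.2                       # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1   # target ns -> root ns
```

Isolating the target port in a namespace is what lets the `nvmf_tgt` process (started below with `ip netns exec cvl_0_0_ns_spdk ...`) and the initiator share one machine while still traversing the physical NIC pair.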
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1709902 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:17.579 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1709902 00:07:17.580 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1709902 ']' 00:07:17.580 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.580 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.580 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.580 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.580 00:49:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.580 [2024-07-26 00:49:47.960365] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:07:17.580 [2024-07-26 00:49:47.960467] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.580 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.838 [2024-07-26 00:49:48.031128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:17.838 [2024-07-26 00:49:48.122835] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:17.838 [2024-07-26 00:49:48.122899] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:17.839 [2024-07-26 00:49:48.122926] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:17.839 [2024-07-26 00:49:48.122940] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:17.839 [2024-07-26 00:49:48.122951] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:17.839 [2024-07-26 00:49:48.123077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.839 [2024-07-26 00:49:48.123194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:17.839 [2024-07-26 00:49:48.123198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.839 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.839 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:07:17.839 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:17.839 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:17.839 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.839 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:17.839 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:17.839 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.839 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.099 [2024-07-26 00:49:48.267812] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:18.099 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.099 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:18.099 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.099 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.099 Malloc0 00:07:18.099 00:49:48 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.099 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:18.099 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.099 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.099 Delay0 00:07:18.099 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.099 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:18.099 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.099 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.099 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.099 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:18.099 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.099 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.099 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.099 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:18.099 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.099 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.099 [2024-07-26 00:49:48.341165] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.099 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.099 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:18.099 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.099 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.099 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.099 00:49:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:18.099 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.099 [2024-07-26 00:49:48.487186] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:20.638 Initializing NVMe Controllers 00:07:20.638 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:20.638 controller IO queue size 128 less than required 00:07:20.638 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:20.638 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:20.638 Initialization complete. Launching workers. 
00:07:20.638 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 33268 00:07:20.638 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33333, failed to submit 62 00:07:20.638 success 33272, unsuccess 61, failed 0 00:07:20.638 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:20.639 rmmod nvme_tcp 00:07:20.639 rmmod nvme_fabrics 00:07:20.639 rmmod nvme_keyring 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:20.639 00:49:50 
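The target-side configuration replayed in the trace (`target/abort.sh` lines 17–27) reduces to six JSON-RPC calls against the running `nvmf_tgt`. A sketch of that sequence using SPDK's `rpc.py` exactly as invoked in this log (the `$RPC` path is an assumption; the log uses the full repo path, and the calls must be issued inside the target's namespace):

```shell
# RPC sequence from the trace: TCP transport, a 64 MiB malloc bdev wrapped
# in a delay bdev, then a subsystem exporting it on 10.0.0.2:4420.
RPC="./scripts/rpc.py"                 # path is an assumption
NQN=nqn.2016-06.io.spdk:cnode0

$RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
$RPC bdev_malloc_create 64 4096 -b Malloc0
$RPC bdev_delay_create -b Malloc0 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000   # 1 s latencies per op class
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK0
$RPC nvmf_subsystem_add_ns "$NQN" Delay0
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```

The delay bdev is the point of the test: with 1 s artificial latency, queued I/O stays in flight long enough for the `abort` example tool (run above with `-q 128`) to submit aborts against it, which is what the "abort submitted 33333" counters report.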
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1709902 ']' 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1709902 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1709902 ']' 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1709902 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1709902 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1709902' 00:07:20.639 killing process with pid 1709902 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1709902 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1709902 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:20.639 00:49:50 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:20.639 00:49:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.653 00:49:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:22.653 00:07:22.653 real 0m7.318s 00:07:22.653 user 0m10.803s 00:07:22.653 sys 0m2.486s 00:07:22.653 00:49:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.653 00:49:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:22.653 ************************************ 00:07:22.653 END TEST nvmf_abort 00:07:22.653 ************************************ 00:07:22.653 00:49:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:22.653 00:49:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:22.653 00:49:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.653 00:49:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:22.653 ************************************ 00:07:22.653 START TEST nvmf_ns_hotplug_stress 00:07:22.653 ************************************ 00:07:22.653 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:22.911 * Looking for test storage... 
00:07:22.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:22.911 00:49:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:22.911 00:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:24.815 00:49:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:24.815 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:24.815 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:24.815 00:49:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:24.815 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:24.815 Found net devices 
under 0000:0a:00.1: cvl_0_1 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:24.815 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 
addr flush cvl_0_0 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:24.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:24.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:07:24.816 00:07:24.816 --- 10.0.0.2 ping statistics --- 00:07:24.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.816 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:24.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:24.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:07:24.816 00:07:24.816 --- 10.0.0.1 ping statistics --- 00:07:24.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.816 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1712161 00:07:24.816 00:49:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1712161 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1712161 ']' 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.816 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:25.076 [2024-07-26 00:49:55.254700] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:07:25.076 [2024-07-26 00:49:55.254797] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.076 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.076 [2024-07-26 00:49:55.323836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:25.076 [2024-07-26 00:49:55.413914] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:07:25.076 [2024-07-26 00:49:55.413978] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:25.076 [2024-07-26 00:49:55.414005] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.076 [2024-07-26 00:49:55.414019] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:25.076 [2024-07-26 00:49:55.414031] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:25.076 [2024-07-26 00:49:55.414134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.076 [2024-07-26 00:49:55.414250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.076 [2024-07-26 00:49:55.414253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.334 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:25.334 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:07:25.334 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:25.335 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:25.335 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:25.335 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.335 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:25.335 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 
00:07:25.593 [2024-07-26 00:49:55.775178] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.593 00:49:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:25.852 00:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:26.109 [2024-07-26 00:49:56.314801] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:26.109 00:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:26.376 00:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:26.638 Malloc0 00:07:26.638 00:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:26.897 Delay0 00:07:26.897 00:49:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.156 00:49:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:27.416 NULL1 00:07:27.416 00:49:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:27.675 00:49:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1712547 00:07:27.675 00:49:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:27.675 00:49:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712547 00:07:27.675 00:49:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.675 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.610 Read completed with error (sct=0, sc=11) 00:07:28.610 00:49:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.610 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.869 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.869 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.869 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.869 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.869 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.869 00:49:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:28.869 00:49:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:29.127 true 00:07:29.127 00:49:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712547 00:07:29.127 00:49:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.066 00:50:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.325 00:50:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:30.325 00:50:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:30.583 true 00:07:30.583 00:50:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712547 00:07:30.583 00:50:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.842 00:50:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.100 00:50:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:31.100 00:50:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:31.358 true 00:07:31.358 00:50:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712547 00:07:31.358 00:50:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.617 00:50:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.877 00:50:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:31.877 00:50:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:31.878 true 00:07:31.878 00:50:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712547 00:07:31.878 00:50:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.257 00:50:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.257 00:50:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:33.257 00:50:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:33.514 true 00:07:33.514 00:50:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712547 00:07:33.514 00:50:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.772 00:50:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.030 00:50:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:34.030 00:50:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:34.288 true 00:07:34.288 00:50:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712547 00:07:34.288 00:50:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.220 00:50:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.478 00:50:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:35.478 00:50:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:35.736 true 00:07:35.736 00:50:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712547 00:07:35.736 00:50:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.994 00:50:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.252 00:50:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:36.252 00:50:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:36.511 true 00:07:36.511 00:50:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712547 00:07:36.511 00:50:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.447 00:50:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.447 00:50:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:37.448 00:50:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:37.706 true 00:07:37.706 00:50:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712547 00:07:37.706 00:50:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.965 00:50:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.223 00:50:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:38.223 00:50:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:38.480 true 00:07:38.480 00:50:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712547 00:07:38.480 00:50:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.458 00:50:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.716 00:50:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:39.716 00:50:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:39.974 true 00:07:39.974 00:50:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712547 00:07:39.974 00:50:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.232 00:50:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.490 00:50:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:40.490 00:50:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:40.747 true 00:07:40.747 00:50:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712547 00:07:40.747 00:50:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.687 00:50:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.687 00:50:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:41.687 00:50:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:41.945 true 00:07:41.945 00:50:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712547 00:07:41.945 00:50:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.202 00:50:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.460 00:50:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:42.460 00:50:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:42.717 true 00:07:42.717 00:50:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712547 00:07:42.717 00:50:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.653 00:50:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.910 00:50:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:43.910 00:50:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:44.167 true 00:07:44.167 00:50:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712547 00:07:44.167 00:50:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.425 00:50:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.682 00:50:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:44.682 00:50:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:44.940 true 00:07:44.940 00:50:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712547 00:07:44.940 00:50:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.878 00:50:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.878 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.136 00:50:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:46.136 00:50:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:46.394 true 00:07:46.394 00:50:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712547 00:07:46.394 00:50:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.651 00:50:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.908 00:50:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:46.908 00:50:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:47.165 true 00:07:47.165 00:50:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712547 00:07:47.165 00:50:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.102 00:50:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.102 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.102 00:50:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:48.102 00:50:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:48.360 true 00:07:48.360 00:50:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712547 00:07:48.360 00:50:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.618 00:50:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.876 00:50:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:48.876 00:50:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:49.134 true 00:07:49.134 00:50:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712547 00:07:49.134 00:50:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.069 00:50:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.327 00:50:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:50.327 00:50:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:50.585 true 00:07:50.585 00:50:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712547 00:07:50.585 00:50:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.843 00:50:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.102 00:50:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:51.102 00:50:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:51.360 true 00:07:51.360 00:50:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712547 00:07:51.360 00:50:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.618 00:50:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.876 00:50:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:51.876 00:50:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:52.133 true 00:07:52.133 00:50:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712547 00:07:52.133 00:50:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.067 00:50:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.067 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.325 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.325 00:50:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:53.325 00:50:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:53.582 true 00:07:53.583 00:50:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712547 00:07:53.583 00:50:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.841 00:50:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.099 00:50:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:54.099 00:50:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:54.357 true 00:07:54.357 00:50:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill 
-0 1712547 00:07:54.357 00:50:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.350 00:50:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.607 00:50:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:55.607 00:50:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:55.865 true 00:07:55.865 00:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712547 00:07:55.865 00:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.123 00:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.381 00:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:56.381 00:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:56.639 true 00:07:56.639 00:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1712547 00:07:56.639 00:50:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.576 00:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.576 00:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:57.576 00:50:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:57.834 Initializing NVMe Controllers 00:07:57.834 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:57.834 Controller IO queue size 128, less than required. 00:07:57.834 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:57.834 Controller IO queue size 128, less than required. 00:07:57.834 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:57.834 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:57.834 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:57.834 Initialization complete. Launching workers. 
00:07:57.834 ======================================================== 00:07:57.834 Latency(us) 00:07:57.834 Device Information : IOPS MiB/s Average min max 00:07:57.834 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 830.86 0.41 81192.22 2503.68 1065504.49 00:07:57.834 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11198.48 5.47 11431.44 1736.31 454768.69 00:07:57.834 ======================================================== 00:07:57.834 Total : 12029.34 5.87 16249.80 1736.31 1065504.49 00:07:57.834 00:07:57.834 true 00:07:57.834 00:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1712547 00:07:57.834 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1712547) - No such process 00:07:57.834 00:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1712547 00:07:57.834 00:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.092 00:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:58.350 00:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:58.350 00:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:58.350 00:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:58.350 00:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:58.350 00:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:58.608 null0 00:07:58.608 00:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:58.608 00:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:58.608 00:50:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:58.866 null1 00:07:58.866 00:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:58.866 00:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:58.866 00:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:59.125 null2 00:07:59.125 00:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:59.125 00:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:59.125 00:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:59.383 null3 00:07:59.383 00:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:59.383 00:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:59.383 00:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:59.641 null4 00:07:59.641 00:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:59.641 00:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:59.641 00:50:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:59.900 null5 00:07:59.900 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:59.900 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:59.900 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:00.158 null6 00:08:00.158 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:00.158 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:00.158 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:00.417 null7 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:00.417 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:00.418 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:00.418 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.418 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:00.418 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.418 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:00.418 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:00.418 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:00.418 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:00.418 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:00.418 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.418 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:00.418 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.418 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:00.418 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:00.418 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:00.418 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:00.418 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:00.418 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.418 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:00.418 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.418 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:00.418 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:00.418 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:00.418 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:00.418 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:00.418 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.418 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:00.418 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1716493 1716494 1716496 1716498 1716500 1716502 1716504 1716506 00:08:00.418 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.418 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:00.676 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:00.676 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:00.676 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:00.676 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.676 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:00.676 00:50:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:00.676 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:00.676 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:00.934 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.934 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.934 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:00.934 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.934 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.934 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 
00:08:00.934 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.934 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.934 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:00.934 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.934 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.934 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:00.934 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.934 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.934 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:00.934 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.934 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.934 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:00.934 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.934 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.934 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:00.934 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.934 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.934 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:01.192 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:01.192 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.192 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:01.192 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:01.192 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:08:01.192 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:01.192 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:01.192 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:01.450 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.450 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.450 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:01.450 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.450 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.450 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:01.450 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.450 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.450 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:01.450 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.450 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.450 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:01.450 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.450 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.450 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:01.450 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.450 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.450 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:01.450 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.450 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.450 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:01.450 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.450 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.450 00:50:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:01.709 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.709 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:01.709 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:01.709 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:01.709 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:01.709 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:01.709 00:50:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:01.709 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:01.967 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.967 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.967 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:01.967 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.967 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.967 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:01.967 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.967 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.967 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:01.967 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:01.967 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.967 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:01.967 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.967 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.967 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:01.967 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.967 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.967 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:01.967 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.967 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.967 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:01.967 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.967 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.967 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:02.225 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.225 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:02.225 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:02.225 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:02.225 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:02.225 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:02.225 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:02.225 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:02.483 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.483 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.483 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:02.483 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.483 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.483 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:02.483 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.484 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.484 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:02.484 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.484 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.484 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:08:02.484 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.484 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.484 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.484 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.484 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:02.484 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:02.743 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.743 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.743 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:02.743 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.743 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.743 00:50:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:02.743 00:50:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.002 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:03.002 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:03.002 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:03.002 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:03.002 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:03.002 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:03.002 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:03.260 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.260 00:50:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.260 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:03.260 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.260 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.260 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:03.260 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.260 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.260 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:03.260 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.260 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.260 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:03.260 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.260 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:08:03.260 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:03.260 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.260 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.260 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:03.260 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.260 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.260 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:03.260 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.260 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.260 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:03.517 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:03.517 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.517 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:03.517 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:03.517 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:03.517 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:03.517 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:03.517 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:03.775 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.775 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.775 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:08:03.775 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.775 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.775 00:50:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:03.775 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.775 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.775 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:03.775 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.775 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.775 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:03.775 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.775 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.775 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:03.775 00:50:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.775 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.775 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:03.775 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.775 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.775 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:03.775 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.775 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.775 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:04.034 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:04.034 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:04.034 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.034 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:04.034 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:04.034 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:04.034 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:04.034 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:04.292 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.292 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.292 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:04.292 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.292 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.292 
00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:04.292 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.292 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.292 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:04.292 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.292 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.292 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:04.292 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.292 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.292 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:04.292 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.292 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.292 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:04.292 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.292 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.292 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:04.292 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.292 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.292 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:04.550 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.550 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:04.550 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:04.551 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:04.551 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:04.551 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:04.551 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:04.551 00:50:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:04.808 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.808 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.808 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:04.808 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.808 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.808 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:04.808 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.808 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.808 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.808 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.808 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:04.808 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:04.808 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.808 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.808 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:04.808 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.808 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.808 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:04.808 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.808 00:50:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.808 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:04.808 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.808 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.808 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:05.066 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.066 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:05.066 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:05.066 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:05.066 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:05.066 00:50:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:05.066 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:05.066 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:05.324 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.324 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.324 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:05.324 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.324 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.324 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:05.324 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.324 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.324 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:05.324 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.324 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.324 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:05.324 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.324 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.324 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:05.324 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.324 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.324 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.324 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:05.324 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.324 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:05.324 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.324 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.324 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:05.581 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:05.582 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.582 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:05.582 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:05.582 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:05.582 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:05.582 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:05.582 00:50:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:05.839 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.840 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.840 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.840 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.840 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.840 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.840 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.840 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.840 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.840 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.840 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.840 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.840 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.840 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.840 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.840 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.840 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:05.840 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:05.840 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:05.840 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:05.840 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:05.840 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:05.840 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:05.840 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:05.840 rmmod nvme_tcp 00:08:05.840 rmmod nvme_fabrics 00:08:05.840 rmmod nvme_keyring 00:08:06.099 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:06.099 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:06.099 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:06.099 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1712161 ']' 00:08:06.099 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1712161 00:08:06.099 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # 
'[' -z 1712161 ']' 00:08:06.099 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1712161 00:08:06.099 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:08:06.099 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:06.099 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1712161 00:08:06.099 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:06.099 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:06.099 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1712161' 00:08:06.099 killing process with pid 1712161 00:08:06.099 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1712161 00:08:06.099 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1712161 00:08:06.359 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:06.359 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:06.359 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:06.359 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:06.359 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:06.359 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.359 
00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.359 00:50:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.265 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:08.265 00:08:08.265 real 0m45.554s 00:08:08.265 user 3m27.468s 00:08:08.265 sys 0m16.369s 00:08:08.265 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.265 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:08.265 ************************************ 00:08:08.265 END TEST nvmf_ns_hotplug_stress 00:08:08.265 ************************************ 00:08:08.266 00:50:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:08.266 00:50:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:08.266 00:50:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.266 00:50:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:08.266 ************************************ 00:08:08.266 START TEST nvmf_delete_subsystem 00:08:08.266 ************************************ 00:08:08.266 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:08.266 * Looking for test storage... 
00:08:08.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:08.266 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:08.266 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:08.266 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.266 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.266 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.266 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.266 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.266 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.266 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.266 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.266 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.266 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.266 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:08.266 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:08.266 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.266 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.266 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:08.266 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.266 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:08.526 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.526 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.526 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.526 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.526 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.526 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.526 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:08.526 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.526 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:08.526 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:08.526 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:08.526 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.526 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.526 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.526 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:08.526 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:08.526 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:08.526 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:08.526 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:08.526 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:08:08.526 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:08.526 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:08.526 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:08.526 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.526 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:08.526 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.526 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:08.526 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:08.526 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:08.526 00:50:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.430 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:10.430 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:10.430 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:10.430 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:10.430 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:10.430 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:10.430 00:50:40 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:10.430 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:10.430 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:10.430 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:08:10.430 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:10.430 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:08:10.430 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:10.430 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:08:10.430 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:10.430 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:10.430 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:10.430 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:10.430 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:10.430 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:10.430 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:10.430 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:10.430 00:50:40 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:10.431 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:10.431 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.431 00:50:40 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:10.431 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:10.431 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 
)) 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:10.431 
00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:10.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:10.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:08:10.431 00:08:10.431 --- 10.0.0.2 ping statistics --- 00:08:10.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.431 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:10.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:10.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:08:10.431 00:08:10.431 --- 10.0.0.1 ping statistics --- 00:08:10.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.431 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1719253 00:08:10.431 00:50:40 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1719253 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1719253 ']' 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.431 00:50:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.431 [2024-07-26 00:50:40.807469] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:08:10.432 [2024-07-26 00:50:40.807565] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.432 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.690 [2024-07-26 00:50:40.879623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:10.690 [2024-07-26 00:50:40.974474] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:10.690 [2024-07-26 00:50:40.974536] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.690 [2024-07-26 00:50:40.974562] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:10.690 [2024-07-26 00:50:40.974575] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:10.690 [2024-07-26 00:50:40.974588] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:10.690 [2024-07-26 00:50:40.976088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.690 [2024-07-26 00:50:40.976101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.690 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.690 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:08:10.690 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:10.690 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:10.690 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.950 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.950 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:10.950 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.950 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.950 [2024-07-26 00:50:41.123707] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.950 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.950 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:10.950 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.950 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.950 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.950 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:10.950 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.950 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.950 [2024-07-26 00:50:41.139931] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:10.950 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.950 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:10.950 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.950 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.950 NULL1 00:08:10.950 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.950 00:50:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:10.950 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.950 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.950 Delay0 00:08:10.950 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.950 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.950 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.950 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.950 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.950 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1719396 00:08:10.950 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:10.950 00:50:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:10.950 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.950 [2024-07-26 00:50:41.214612] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:08:12.882 00:50:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:12.882 00:50:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.882 00:50:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:13.142 Write completed with error (sct=0, sc=8) 00:08:13.142 Read completed with error (sct=0, sc=8) 00:08:13.142 starting I/O failed: -6 00:08:13.142 Read completed with error (sct=0, sc=8) 00:08:13.142 Read completed with error (sct=0, sc=8) 00:08:13.142 Read completed with error (sct=0, sc=8) 00:08:13.142 Read completed with error (sct=0, sc=8) 00:08:13.142 starting I/O failed: -6 00:08:13.142 Write completed with error (sct=0, sc=8) 00:08:13.142 Read completed with error (sct=0, sc=8) 00:08:13.142 Write completed with error (sct=0, sc=8) 00:08:13.142 Write completed with error (sct=0, sc=8) 00:08:13.142 starting I/O failed: -6 00:08:13.142 Write completed with error (sct=0, sc=8) 00:08:13.142 Write completed with error (sct=0, sc=8) 00:08:13.142 Read completed with error (sct=0, sc=8) 00:08:13.142 Read completed with error (sct=0, sc=8) 00:08:13.142 starting I/O failed: -6 00:08:13.142 Read completed with error (sct=0, sc=8) 00:08:13.142 Read completed with error (sct=0, sc=8) 00:08:13.142 Write completed with error (sct=0, sc=8) 00:08:13.142 Write completed with error (sct=0, sc=8) 00:08:13.142 starting I/O failed: -6 00:08:13.142 Read completed with error (sct=0, sc=8) 00:08:13.142 Read completed with error (sct=0, sc=8) 00:08:13.142 Write completed with error (sct=0, sc=8) 00:08:13.142 Write completed with error (sct=0, sc=8) 00:08:13.142 starting I/O failed: -6 00:08:13.142 Read completed with error (sct=0, sc=8) 00:08:13.142 Read completed with error (sct=0, sc=8) 00:08:13.142 Read completed with error (sct=0, sc=8) 00:08:13.142 Read completed with error 
(sct=0, sc=8) 00:08:13.142 starting I/O failed: -6 00:08:13.142 Read completed with error (sct=0, sc=8) 00:08:13.142 Read completed with error (sct=0, sc=8) 00:08:13.142 Write completed with error (sct=0, sc=8) 00:08:13.142 Read completed with error (sct=0, sc=8) 00:08:13.142 starting I/O failed: -6 00:08:13.142 Read completed with error (sct=0, sc=8) 00:08:13.142 Read completed with error (sct=0, sc=8) 00:08:13.142 Read completed with error (sct=0, sc=8) 00:08:13.142 Write completed with error (sct=0, sc=8) 00:08:13.142 starting I/O failed: -6 00:08:13.142 Read completed with error (sct=0, sc=8) 00:08:13.142 Read completed with error (sct=0, sc=8) 00:08:13.142 Write completed with error (sct=0, sc=8) 00:08:13.142 Read completed with error (sct=0, sc=8) 00:08:13.142 starting I/O failed: -6 00:08:13.142 Read completed with error (sct=0, sc=8) 00:08:13.142 Read completed with error (sct=0, sc=8) 00:08:13.142 Write completed with error (sct=0, sc=8) 00:08:13.142 Write completed with error (sct=0, sc=8) 00:08:13.142 starting I/O failed: -6 00:08:13.142 Read completed with error (sct=0, sc=8) 00:08:13.142 [2024-07-26 00:50:43.348610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df0180 is same with the state(5) to be set 00:08:13.142 Read completed with error (sct=0, sc=8) 00:08:13.142 Read completed with error (sct=0, sc=8) 00:08:13.142 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 starting I/O failed: -6 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, 
sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 starting I/O failed: -6 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 starting I/O failed: -6 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 starting I/O failed: -6 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 
Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 starting I/O failed: -6 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 starting I/O failed: -6 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 starting I/O failed: -6 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Read completed with 
error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 starting I/O failed: -6 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 starting I/O failed: -6 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 starting I/O failed: -6 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 starting I/O failed: -6 00:08:13.143 [2024-07-26 00:50:43.349398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc6c4000c00 is same with the state(5) to be set 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 
00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Read completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:13.143 Write completed with error (sct=0, sc=8) 00:08:14.081 [2024-07-26 
00:50:44.312480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfda30 is same with the state(5) to be set 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Write completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Write completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Write completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Write completed with error (sct=0, sc=8) 00:08:14.081 Write completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Write completed with error (sct=0, sc=8) 00:08:14.081 Write completed with error (sct=0, sc=8) 00:08:14.081 Write completed with error (sct=0, sc=8) 00:08:14.081 [2024-07-26 00:50:44.349414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1defe50 is same with the state(5) to be set 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Write completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with 
error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Write completed with error (sct=0, sc=8) 00:08:14.081 Write completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Write completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Write completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 [2024-07-26 00:50:44.350238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df04b0 is same with the state(5) to be set 00:08:14.081 Write completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Write completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Write completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Write completed with error (sct=0, sc=8) 00:08:14.081 Write completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Write completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, 
sc=8) 00:08:14.081 [2024-07-26 00:50:44.351634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc6c400d000 is same with the state(5) to be set 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Write completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Write completed with error (sct=0, sc=8) 00:08:14.081 Write completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Write completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Write completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Write completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.081 Read completed with error (sct=0, sc=8) 00:08:14.082 Read completed with error (sct=0, sc=8) 00:08:14.082 Write completed with error (sct=0, sc=8) 00:08:14.082 Write completed with error (sct=0, sc=8) 00:08:14.082 Write completed with error (sct=0, sc=8) 00:08:14.082 Write completed with error (sct=0, sc=8) 00:08:14.082 Write completed with error (sct=0, sc=8) 00:08:14.082 Write completed with error (sct=0, sc=8) 00:08:14.082 Write completed with error (sct=0, sc=8) 00:08:14.082 [2024-07-26 00:50:44.352296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc6c400d660 is same with the state(5) to be set 00:08:14.082 Initializing NVMe Controllers 00:08:14.082 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:14.082 Controller IO queue size 128, less than required. 
00:08:14.082 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:14.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:14.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:14.082 Initialization complete. Launching workers. 00:08:14.082 ======================================================== 00:08:14.082 Latency(us) 00:08:14.082 Device Information : IOPS MiB/s Average min max 00:08:14.082 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.22 0.08 906784.90 670.32 1013940.26 00:08:14.082 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 165.71 0.08 916432.33 418.52 2001522.36 00:08:14.082 ======================================================== 00:08:14.082 Total : 330.93 0.16 911615.84 418.52 2001522.36 00:08:14.082 00:08:14.082 [2024-07-26 00:50:44.352703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dfda30 (9): Bad file descriptor 00:08:14.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:14.082 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.082 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:14.082 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1719396 00:08:14.082 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:14.652 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:14.652 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1719396 00:08:14.652 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1719396) - No such process 00:08:14.652 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1719396 00:08:14.652 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:08:14.652 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1719396 00:08:14.652 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:08:14.652 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.652 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:08:14.652 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.652 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1719396 00:08:14.652 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:08:14.652 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:14.652 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:14.652 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:14.652 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:14.652 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.652 00:50:44 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:14.652 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.652 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:14.652 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.652 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:14.652 [2024-07-26 00:50:44.876359] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:14.652 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.652 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.652 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.652 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:14.652 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.652 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1719806 00:08:14.652 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:14.652 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:14.652 00:50:44 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1719806 00:08:14.652 00:50:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:14.652 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.652 [2024-07-26 00:50:44.938829] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:15.221 00:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:15.221 00:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1719806 00:08:15.221 00:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:15.480 00:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:15.480 00:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1719806 00:08:15.480 00:50:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:16.048 00:50:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:16.048 00:50:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1719806 00:08:16.048 00:50:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:16.615 00:50:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:16.615 00:50:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1719806 00:08:16.615 00:50:46 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:17.182 00:50:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:17.182 00:50:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1719806 00:08:17.182 00:50:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:17.750 00:50:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:17.750 00:50:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1719806 00:08:17.750 00:50:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:18.009 Initializing NVMe Controllers 00:08:18.009 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:18.009 Controller IO queue size 128, less than required. 00:08:18.009 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:18.009 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:18.009 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:18.009 Initialization complete. Launching workers. 
00:08:18.009 ======================================================== 00:08:18.009 Latency(us) 00:08:18.009 Device Information : IOPS MiB/s Average min max 00:08:18.009 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003866.97 1000177.30 1042365.60 00:08:18.009 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006005.25 1000194.70 1042989.47 00:08:18.009 ======================================================== 00:08:18.009 Total : 256.00 0.12 1004936.11 1000177.30 1042989.47 00:08:18.009 00:08:18.009 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:18.009 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1719806 00:08:18.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1719806) - No such process 00:08:18.009 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1719806 00:08:18.009 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:18.009 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:18.009 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:18.009 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:08:18.009 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:18.009 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:08:18.009 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:18.009 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:08:18.009 rmmod nvme_tcp 00:08:18.269 rmmod nvme_fabrics 00:08:18.269 rmmod nvme_keyring 00:08:18.269 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:18.269 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:08:18.269 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:08:18.269 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1719253 ']' 00:08:18.269 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1719253 00:08:18.269 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1719253 ']' 00:08:18.269 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1719253 00:08:18.269 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:08:18.269 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:18.269 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1719253 00:08:18.269 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:18.269 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:18.269 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1719253' 00:08:18.269 killing process with pid 1719253 00:08:18.269 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1719253 00:08:18.269 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 
1719253 00:08:18.529 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:18.529 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:18.529 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:18.529 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:18.529 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:18.529 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.529 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:18.529 00:50:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.435 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:20.435 00:08:20.435 real 0m12.152s 00:08:20.435 user 0m27.853s 00:08:20.435 sys 0m2.889s 00:08:20.435 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.435 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.435 ************************************ 00:08:20.435 END TEST nvmf_delete_subsystem 00:08:20.435 ************************************ 00:08:20.435 00:50:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:20.435 00:50:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:20.435 00:50:50 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.435 00:50:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:20.435 ************************************ 00:08:20.435 START TEST nvmf_host_management 00:08:20.435 ************************************ 00:08:20.435 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:20.694 * Looking for test storage... 00:08:20.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.694 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:20.694 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:20.694 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.694 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.694 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.694 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.694 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.694 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.694 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.694 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.694 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:08:20.694 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.694 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:20.694 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:20.694 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.694 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.694 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:20.694 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.694 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:20.694 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.694 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.694 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.695 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.695 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.695 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.695 00:50:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:20.695 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.695 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:08:20.695 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:20.695 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:20.695 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.695 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.695 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.695 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:20.695 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:20.695 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:20.695 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:20.695 00:50:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:20.695 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:20.695 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:20.695 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.695 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:20.695 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:20.695 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:20.695 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.695 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.695 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.695 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:20.695 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:20.695 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:08:20.695 00:50:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:22.598 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:22.598 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:22.598 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: 
cvl_0_1' 00:08:22.598 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # 
ip -4 addr flush cvl_0_0 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:22.598 00:50:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:22.598 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:22.598 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:22.859 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:22.859 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:22.859 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:22.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:22.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:08:22.859 00:08:22.859 --- 10.0.0.2 ping statistics --- 00:08:22.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.859 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:08:22.859 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:22.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:22.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:08:22.859 00:08:22.859 --- 10.0.0.1 ping statistics --- 00:08:22.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.859 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:22.859 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:22.859 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:08:22.859 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:22.859 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:22.859 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:22.859 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:22.859 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:22.859 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:22.859 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:22.859 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:22.859 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:22.859 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:22.859 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:22.859 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:22.859 00:50:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:22.859 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1722150 00:08:22.859 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:22.859 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1722150 00:08:22.859 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1722150 ']' 00:08:22.859 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.859 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:22.859 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.859 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:22.859 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:22.859 [2024-07-26 00:50:53.143735] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:08:22.859 [2024-07-26 00:50:53.143830] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.859 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.859 [2024-07-26 00:50:53.207710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.117 [2024-07-26 00:50:53.298742] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.118 [2024-07-26 00:50:53.298795] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.118 [2024-07-26 00:50:53.298824] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.118 [2024-07-26 00:50:53.298835] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.118 [2024-07-26 00:50:53.298845] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:23.118 [2024-07-26 00:50:53.299131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.118 [2024-07-26 00:50:53.299234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.118 [2024-07-26 00:50:53.299471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:23.118 [2024-07-26 00:50:53.299475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.118 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:23.118 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:23.118 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:23.118 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:23.118 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.118 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.118 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:23.118 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.118 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.118 [2024-07-26 00:50:53.457256] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:23.118 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.118 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:23.118 00:50:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:23.118 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.118 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:23.118 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:23.118 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:23.118 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.118 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.118 Malloc0 00:08:23.118 [2024-07-26 00:50:53.518267] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:23.118 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.118 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:23.118 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:23.118 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.376 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1722310 00:08:23.376 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1722310 /var/tmp/bdevperf.sock 00:08:23.376 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1722310 ']' 00:08:23.376 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:23.376 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:23.376 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:23.376 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:23.376 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:23.376 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:23.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:23.377 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:23.377 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:23.377 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:23.377 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.377 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:23.377 { 00:08:23.377 "params": { 00:08:23.377 "name": "Nvme$subsystem", 00:08:23.377 "trtype": "$TEST_TRANSPORT", 00:08:23.377 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:23.377 "adrfam": "ipv4", 00:08:23.377 "trsvcid": "$NVMF_PORT", 00:08:23.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:23.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:23.377 "hdgst": ${hdgst:-false}, 
00:08:23.377 "ddgst": ${ddgst:-false} 00:08:23.377 }, 00:08:23.377 "method": "bdev_nvme_attach_controller" 00:08:23.377 } 00:08:23.377 EOF 00:08:23.377 )") 00:08:23.377 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:23.377 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:23.377 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:23.377 00:50:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:23.377 "params": { 00:08:23.377 "name": "Nvme0", 00:08:23.377 "trtype": "tcp", 00:08:23.377 "traddr": "10.0.0.2", 00:08:23.377 "adrfam": "ipv4", 00:08:23.377 "trsvcid": "4420", 00:08:23.377 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:23.377 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:23.377 "hdgst": false, 00:08:23.377 "ddgst": false 00:08:23.377 }, 00:08:23.377 "method": "bdev_nvme_attach_controller" 00:08:23.377 }' 00:08:23.377 [2024-07-26 00:50:53.596800] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:08:23.377 [2024-07-26 00:50:53.596870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1722310 ] 00:08:23.377 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.377 [2024-07-26 00:50:53.658529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.377 [2024-07-26 00:50:53.745566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.637 Running I/O for 10 seconds... 
00:08:23.896 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:23.896 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:23.896 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:23.896 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.896 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.896 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.896 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:23.896 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:23.896 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:23.896 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:23.896 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:23.896 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:23.896 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:23.896 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:23.896 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:23.896 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:23.896 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.896 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.896 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.896 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:23.896 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:23.896 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:24.155 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:24.155 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:24.155 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:24.155 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:24.155 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.155 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.155 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.155 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:08:24.155 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:08:24.155 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:24.155 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:24.155 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:24.155 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:24.155 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.155 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.155 [2024-07-26 00:50:54.443374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:24.155 [2024-07-26 00:50:54.443461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.155 [2024-07-26 00:50:54.443490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:24.155 [2024-07-26 00:50:54.443506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.155 [2024-07-26 00:50:54.443519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:24.155 [2024-07-26 00:50:54.443533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.155 [2024-07-26 00:50:54.443547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:24.155 [2024-07-26 00:50:54.443560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.155 [2024-07-26 00:50:54.443573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b4000 is same with the state(5) to be set 00:08:24.155 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.155 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:24.155 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.155 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.155 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.155 00:50:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:24.155 [2024-07-26 00:50:54.454084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b4000 (9): Bad file descriptor 00:08:24.155 [2024-07-26 00:50:54.454184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.155 [2024-07-26 00:50:54.454210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.155 [2024-07-26 00:50:54.454235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.155 [2024-07-26 00:50:54.454250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.155 [2024-07-26 00:50:54.454266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.155 [2024-07-26 00:50:54.454280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.155 [2024-07-26 00:50:54.454297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.155 [2024-07-26 00:50:54.454310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.155 [2024-07-26 00:50:54.454326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.155 [2024-07-26 00:50:54.454339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.155 [2024-07-26 00:50:54.454354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.155 [2024-07-26 00:50:54.454386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.155 [2024-07-26 00:50:54.454417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.155 [2024-07-26 00:50:54.454430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.155 [2024-07-26 00:50:54.454444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.155 
[2024-07-26 00:50:54.454457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.155 [2024-07-26 00:50:54.454472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.156 [...identical WRITE / ABORTED - SQ DELETION (00/08) notice pairs repeat for cid:9 through cid:63, lba:83072 through lba:89984...] [2024-07-26 00:50:54.456179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.157 [2024-07-26 00:50:54.456258] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11ae420 was disconnected and freed. reset controller. 00:08:24.157 [2024-07-26 00:50:54.457403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:24.157 task offset: 81920 on job bdev=Nvme0n1 fails 00:08:24.157 00:08:24.157 Latency(us) 00:08:24.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.157 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:24.157 Job: Nvme0n1 ended in about 0.41 seconds with error 00:08:24.157 Verification LBA range: start 0x0 length 0x400 00:08:24.157 Nvme0n1 : 0.41 1548.20 96.76 154.82 0.00 36525.07 2524.35 34175.81 00:08:24.157 =================================================================================================================== 00:08:24.157 Total : 1548.20 96.76 154.82 0.00 36525.07 2524.35 34175.81 00:08:24.157 [2024-07-26 00:50:54.459376] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:24.157 [2024-07-26 00:50:54.463919] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:25.089 00:50:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1722310 00:08:25.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1722310) - No such process 00:08:25.089 00:50:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:25.089 00:50:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:25.089 00:50:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:25.089 00:50:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:25.089 00:50:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:25.089 00:50:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:25.089 00:50:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:25.089 00:50:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:25.089 { 00:08:25.089 "params": { 00:08:25.089 "name": "Nvme$subsystem", 00:08:25.089 "trtype": "$TEST_TRANSPORT", 00:08:25.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:25.089 "adrfam": "ipv4", 00:08:25.090 "trsvcid": "$NVMF_PORT", 00:08:25.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:25.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:25.090 "hdgst": ${hdgst:-false}, 00:08:25.090 "ddgst": ${ddgst:-false} 00:08:25.090 }, 00:08:25.090 "method": "bdev_nvme_attach_controller" 00:08:25.090 } 00:08:25.090 EOF 00:08:25.090 )") 00:08:25.090 
00:50:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:25.090 00:50:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:25.090 00:50:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:25.090 00:50:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:25.090 "params": { 00:08:25.090 "name": "Nvme0", 00:08:25.090 "trtype": "tcp", 00:08:25.090 "traddr": "10.0.0.2", 00:08:25.090 "adrfam": "ipv4", 00:08:25.090 "trsvcid": "4420", 00:08:25.090 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:25.090 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:25.090 "hdgst": false, 00:08:25.090 "ddgst": false 00:08:25.090 }, 00:08:25.090 "method": "bdev_nvme_attach_controller" 00:08:25.090 }' 00:08:25.090 [2024-07-26 00:50:55.502038] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:08:25.090 [2024-07-26 00:50:55.502136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1722475 ] 00:08:25.349 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.349 [2024-07-26 00:50:55.562697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.349 [2024-07-26 00:50:55.650335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.607 Running I/O for 1 seconds... 
00:08:26.987 00:08:26.987 Latency(us) 00:08:26.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.987 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:26.987 Verification LBA range: start 0x0 length 0x400 00:08:26.987 Nvme0n1 : 1.05 1583.68 98.98 0.00 0.00 38297.59 9077.95 49127.73 00:08:26.987 =================================================================================================================== 00:08:26.987 Total : 1583.68 98.98 0.00 0.00 38297.59 9077.95 49127.73 00:08:26.987 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:26.987 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:26.988 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:26.988 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:26.988 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:26.988 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:26.988 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:26.988 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:26.988 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:26.988 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:26.988 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:26.988 rmmod nvme_tcp 
00:08:26.988 rmmod nvme_fabrics 00:08:26.988 rmmod nvme_keyring 00:08:26.988 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:26.988 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:26.988 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:26.988 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1722150 ']' 00:08:26.988 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1722150 00:08:26.988 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1722150 ']' 00:08:26.988 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1722150 00:08:26.988 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:26.988 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:26.988 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1722150 00:08:26.988 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:26.988 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:26.988 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1722150' 00:08:26.988 killing process with pid 1722150 00:08:26.988 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1722150 00:08:26.988 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1722150 00:08:27.246 [2024-07-26 00:50:57.537057] 
app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:27.246 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:27.246 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:27.246 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:27.246 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:27.246 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:27.246 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.246 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.246 00:50:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.777 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:29.777 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:29.777 00:08:29.777 real 0m8.768s 00:08:29.777 user 0m20.077s 00:08:29.777 sys 0m2.604s 00:08:29.777 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.777 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.777 ************************************ 00:08:29.777 END TEST nvmf_host_management 00:08:29.777 ************************************ 00:08:29.777 00:50:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh 
--transport=tcp 00:08:29.777 00:50:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:29.777 00:50:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.777 00:50:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:29.777 ************************************ 00:08:29.777 START TEST nvmf_lvol 00:08:29.777 ************************************ 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:29.778 * Looking for test storage... 00:08:29.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
paths/export.sh@5 -- # export PATH 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 
00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:08:29.778 00:50:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:31.186 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.186 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:08:31.186 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:08:31.186 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:31.186 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:31.186 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:31.186 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:31.186 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:08:31.186 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:31.186 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:08:31.186 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:08:31.186 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:08:31.186 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:08:31.186 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:08:31.186 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:08:31.186 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.186 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.186 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.186 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.186 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.186 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.186 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.186 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:31.187 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:31.187 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.187 00:51:01 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:31.187 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.187 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.445 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:31.445 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:31.445 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.445 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:31.445 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:08:31.445 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:31.445 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:31.445 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:31.445 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:31.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:08:31.446 00:08:31.446 --- 10.0.0.2 ping statistics --- 00:08:31.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.446 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:31.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:31.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:08:31.446 00:08:31.446 --- 10.0.0.1 ping statistics --- 00:08:31.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.446 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1724774 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1724774 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1724774 ']' 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:31.446 00:51:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:31.446 [2024-07-26 00:51:01.813722] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:08:31.446 [2024-07-26 00:51:01.813814] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.446 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.704 [2024-07-26 00:51:01.883889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:31.704 [2024-07-26 00:51:01.969130] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.704 [2024-07-26 00:51:01.969183] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.704 [2024-07-26 00:51:01.969197] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.704 [2024-07-26 00:51:01.969209] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.704 [2024-07-26 00:51:01.969218] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:31.704 [2024-07-26 00:51:01.969277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.704 [2024-07-26 00:51:01.969340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.704 [2024-07-26 00:51:01.969343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.704 00:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:31.704 00:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:31.704 00:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:31.704 00:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:31.704 00:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:31.704 00:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.704 00:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:31.962 [2024-07-26 00:51:02.337447] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.962 00:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:32.531 00:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:32.531 00:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:32.531 00:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:32.531 00:51:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:32.789 00:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:33.047 00:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=29cbd73c-4501-4f04-8d1c-7d7909cc89a1 00:08:33.047 00:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 29cbd73c-4501-4f04-8d1c-7d7909cc89a1 lvol 20 00:08:33.304 00:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=1dc4a9fa-fa2e-47f7-9e54-e21c3cde06ba 00:08:33.304 00:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:33.561 00:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1dc4a9fa-fa2e-47f7-9e54-e21c3cde06ba 00:08:33.819 00:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:34.076 [2024-07-26 00:51:04.406526] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.076 00:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:34.335 00:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1725177 00:08:34.335 00:51:04 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:34.335 00:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:34.335 EAL: No free 2048 kB hugepages reported on node 1 00:08:35.272 00:51:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 1dc4a9fa-fa2e-47f7-9e54-e21c3cde06ba MY_SNAPSHOT 00:08:35.841 00:51:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=88a613b9-9a84-46c4-9ada-c56b6f372aee 00:08:35.841 00:51:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 1dc4a9fa-fa2e-47f7-9e54-e21c3cde06ba 30 00:08:36.099 00:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 88a613b9-9a84-46c4-9ada-c56b6f372aee MY_CLONE 00:08:36.357 00:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a16f0d1f-1421-448d-8e3e-bd80fc1e45ea 00:08:36.357 00:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a16f0d1f-1421-448d-8e3e-bd80fc1e45ea 00:08:36.924 00:51:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1725177 00:08:45.042 Initializing NVMe Controllers 00:08:45.042 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:45.042 Controller IO queue size 128, less than required. 00:08:45.042 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:45.042 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:45.042 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:45.042 Initialization complete. Launching workers. 00:08:45.042 ======================================================== 00:08:45.042 Latency(us) 00:08:45.042 Device Information : IOPS MiB/s Average min max 00:08:45.042 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10605.70 41.43 12075.07 2428.32 79635.39 00:08:45.042 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10667.70 41.67 12007.89 2290.53 60916.79 00:08:45.042 ======================================================== 00:08:45.042 Total : 21273.40 83.10 12041.38 2290.53 79635.39 00:08:45.042 00:08:45.042 00:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:45.299 00:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1dc4a9fa-fa2e-47f7-9e54-e21c3cde06ba 00:08:45.299 00:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 29cbd73c-4501-4f04-8d1c-7d7909cc89a1 00:08:45.865 00:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:45.866 00:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:45.866 00:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:45.866 00:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:45.866 00:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:45.866 00:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:45.866 00:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:45.866 00:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:45.866 00:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:45.866 rmmod nvme_tcp 00:08:45.866 rmmod nvme_fabrics 00:08:45.866 rmmod nvme_keyring 00:08:45.866 00:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:45.866 00:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:45.866 00:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:45.866 00:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1724774 ']' 00:08:45.866 00:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1724774 00:08:45.866 00:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1724774 ']' 00:08:45.866 00:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1724774 00:08:45.866 00:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:45.866 00:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:45.866 00:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1724774 00:08:45.866 00:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:45.866 00:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:45.866 00:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1724774' 00:08:45.866 killing process with pid 1724774 00:08:45.866 00:51:16 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1724774 00:08:45.866 00:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1724774 00:08:46.126 00:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:46.126 00:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:46.126 00:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:46.126 00:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:46.126 00:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:46.126 00:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.126 00:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:46.126 00:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.032 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:48.032 00:08:48.032 real 0m18.740s 00:08:48.032 user 1m2.841s 00:08:48.032 sys 0m6.189s 00:08:48.032 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:48.032 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:48.032 ************************************ 00:08:48.032 END TEST nvmf_lvol 00:08:48.032 ************************************ 00:08:48.032 00:51:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:48.032 00:51:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:48.032 00:51:18 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:48.032 00:51:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:48.032 ************************************ 00:08:48.032 START TEST nvmf_lvs_grow 00:08:48.032 ************************************ 00:08:48.032 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:48.291 * Looking for test storage... 00:08:48.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:48.291 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:48.291 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:48.291 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.291 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.291 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.291 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.291 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.291 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.291 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.291 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.291 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.291 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.291 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- paths/export.sh@5 -- # export PATH 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:08:48.292 00:51:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:50.197 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:50.198 00:51:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:50.198 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.198 
00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:50.198 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:50.198 00:51:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:50.198 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:50.198 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:50.198 00:51:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:50.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:50.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:08:50.198 00:08:50.198 --- 10.0.0.2 ping statistics --- 00:08:50.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.198 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:50.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:50.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:08:50.198 00:08:50.198 --- 10.0.0.1 ping statistics --- 00:08:50.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.198 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:50.198 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:50.199 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:50.199 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:50.199 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:50.199 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:50.457 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:50.457 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:50.457 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:50.457 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:50.457 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1728985 00:08:50.457 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:50.457 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1728985 00:08:50.457 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1728985 ']' 00:08:50.457 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.457 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:50.457 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.457 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:50.457 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:50.457 [2024-07-26 00:51:20.683699] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:08:50.457 [2024-07-26 00:51:20.683795] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.457 EAL: No free 2048 kB hugepages reported on node 1 00:08:50.457 [2024-07-26 00:51:20.750523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.457 [2024-07-26 00:51:20.835619] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.457 [2024-07-26 00:51:20.835686] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:50.457 [2024-07-26 00:51:20.835714] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.457 [2024-07-26 00:51:20.835726] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.457 [2024-07-26 00:51:20.835735] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:50.457 [2024-07-26 00:51:20.835766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.715 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:50.715 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:50.715 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:50.715 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:50.715 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:50.715 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.715 00:51:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:50.973 [2024-07-26 00:51:21.190276] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.973 00:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:50.973 00:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:50.973 00:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.973 00:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
common/autotest_common.sh@10 -- # set +x 00:08:50.973 ************************************ 00:08:50.973 START TEST lvs_grow_clean 00:08:50.973 ************************************ 00:08:50.973 00:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:50.973 00:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:50.973 00:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:50.973 00:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:50.973 00:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:50.973 00:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:50.973 00:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:50.973 00:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:50.973 00:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:50.973 00:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:51.232 00:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:51.232 00:51:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:51.492 00:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b57ec0ff-b118-4f14-b958-0185487e9b74 00:08:51.492 00:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b57ec0ff-b118-4f14-b958-0185487e9b74 00:08:51.492 00:51:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:51.750 00:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:51.750 00:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:51.750 00:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b57ec0ff-b118-4f14-b958-0185487e9b74 lvol 150 00:08:52.009 00:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=9d64c739-708a-43d7-9dd4-1b14248c82e3 00:08:52.009 00:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:52.009 00:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:52.293 [2024-07-26 00:51:22.564358] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:52.293 [2024-07-26 00:51:22.564473] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:52.293 true 00:08:52.293 00:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b57ec0ff-b118-4f14-b958-0185487e9b74 00:08:52.293 00:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:52.554 00:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:52.554 00:51:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:52.814 00:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9d64c739-708a-43d7-9dd4-1b14248c82e3 00:08:53.072 00:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:53.330 [2024-07-26 00:51:23.571466] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.330 00:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:53.588 00:51:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1729393 00:08:53.588 00:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:53.588 00:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:53.588 00:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1729393 /var/tmp/bdevperf.sock 00:08:53.588 00:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1729393 ']' 00:08:53.588 00:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:53.588 00:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:53.588 00:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:53.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:53.588 00:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:53.588 00:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:53.588 [2024-07-26 00:51:23.875778] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:08:53.588 [2024-07-26 00:51:23.875866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1729393 ] 00:08:53.588 EAL: No free 2048 kB hugepages reported on node 1 00:08:53.588 [2024-07-26 00:51:23.935450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.845 [2024-07-26 00:51:24.026821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.845 00:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:53.845 00:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:53.845 00:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:54.412 Nvme0n1 00:08:54.412 00:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:54.412 [ 00:08:54.412 { 00:08:54.412 "name": "Nvme0n1", 00:08:54.412 "aliases": [ 00:08:54.412 "9d64c739-708a-43d7-9dd4-1b14248c82e3" 00:08:54.412 ], 00:08:54.412 "product_name": "NVMe disk", 00:08:54.412 "block_size": 4096, 00:08:54.412 "num_blocks": 38912, 00:08:54.412 "uuid": "9d64c739-708a-43d7-9dd4-1b14248c82e3", 00:08:54.412 "assigned_rate_limits": { 00:08:54.412 "rw_ios_per_sec": 0, 00:08:54.412 "rw_mbytes_per_sec": 0, 00:08:54.412 "r_mbytes_per_sec": 0, 00:08:54.412 "w_mbytes_per_sec": 0 00:08:54.412 }, 00:08:54.412 "claimed": false, 00:08:54.412 "zoned": false, 00:08:54.412 
"supported_io_types": { 00:08:54.412 "read": true, 00:08:54.412 "write": true, 00:08:54.412 "unmap": true, 00:08:54.412 "flush": true, 00:08:54.412 "reset": true, 00:08:54.412 "nvme_admin": true, 00:08:54.412 "nvme_io": true, 00:08:54.413 "nvme_io_md": false, 00:08:54.413 "write_zeroes": true, 00:08:54.413 "zcopy": false, 00:08:54.413 "get_zone_info": false, 00:08:54.413 "zone_management": false, 00:08:54.413 "zone_append": false, 00:08:54.413 "compare": true, 00:08:54.413 "compare_and_write": true, 00:08:54.413 "abort": true, 00:08:54.413 "seek_hole": false, 00:08:54.413 "seek_data": false, 00:08:54.413 "copy": true, 00:08:54.413 "nvme_iov_md": false 00:08:54.413 }, 00:08:54.413 "memory_domains": [ 00:08:54.413 { 00:08:54.413 "dma_device_id": "system", 00:08:54.413 "dma_device_type": 1 00:08:54.413 } 00:08:54.413 ], 00:08:54.413 "driver_specific": { 00:08:54.413 "nvme": [ 00:08:54.413 { 00:08:54.413 "trid": { 00:08:54.413 "trtype": "TCP", 00:08:54.413 "adrfam": "IPv4", 00:08:54.413 "traddr": "10.0.0.2", 00:08:54.413 "trsvcid": "4420", 00:08:54.413 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:54.413 }, 00:08:54.413 "ctrlr_data": { 00:08:54.413 "cntlid": 1, 00:08:54.413 "vendor_id": "0x8086", 00:08:54.413 "model_number": "SPDK bdev Controller", 00:08:54.413 "serial_number": "SPDK0", 00:08:54.413 "firmware_revision": "24.09", 00:08:54.413 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:54.413 "oacs": { 00:08:54.413 "security": 0, 00:08:54.413 "format": 0, 00:08:54.413 "firmware": 0, 00:08:54.413 "ns_manage": 0 00:08:54.413 }, 00:08:54.413 "multi_ctrlr": true, 00:08:54.413 "ana_reporting": false 00:08:54.413 }, 00:08:54.413 "vs": { 00:08:54.413 "nvme_version": "1.3" 00:08:54.413 }, 00:08:54.413 "ns_data": { 00:08:54.413 "id": 1, 00:08:54.413 "can_share": true 00:08:54.413 } 00:08:54.413 } 00:08:54.413 ], 00:08:54.413 "mp_policy": "active_passive" 00:08:54.413 } 00:08:54.413 } 00:08:54.413 ] 00:08:54.413 00:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1729449 00:08:54.413 00:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:54.413 00:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:54.670 Running I/O for 10 seconds... 00:08:55.604 Latency(us) 00:08:55.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.604 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.604 Nvme0n1 : 1.00 14479.00 56.56 0.00 0.00 0.00 0.00 0.00 00:08:55.604 =================================================================================================================== 00:08:55.604 Total : 14479.00 56.56 0.00 0.00 0.00 0.00 0.00 00:08:55.604 00:08:56.539 00:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b57ec0ff-b118-4f14-b958-0185487e9b74 00:08:56.539 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.539 Nvme0n1 : 2.00 14638.00 57.18 0.00 0.00 0.00 0.00 0.00 00:08:56.539 =================================================================================================================== 00:08:56.539 Total : 14638.00 57.18 0.00 0.00 0.00 0.00 0.00 00:08:56.539 00:08:56.798 true 00:08:56.798 00:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b57ec0ff-b118-4f14-b958-0185487e9b74 00:08:56.798 00:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:57.057 00:51:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:57.057 00:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:57.057 00:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1729449 00:08:57.624 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.624 Nvme0n1 : 3.00 14669.33 57.30 0.00 0.00 0.00 0.00 0.00 00:08:57.624 =================================================================================================================== 00:08:57.624 Total : 14669.33 57.30 0.00 0.00 0.00 0.00 0.00 00:08:57.624 00:08:58.561 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.561 Nvme0n1 : 4.00 14780.25 57.74 0.00 0.00 0.00 0.00 0.00 00:08:58.561 =================================================================================================================== 00:08:58.561 Total : 14780.25 57.74 0.00 0.00 0.00 0.00 0.00 00:08:58.561 00:08:59.943 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.943 Nvme0n1 : 5.00 14861.00 58.05 0.00 0.00 0.00 0.00 0.00 00:08:59.943 =================================================================================================================== 00:08:59.943 Total : 14861.00 58.05 0.00 0.00 0.00 0.00 0.00 00:08:59.943 00:09:00.882 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.882 Nvme0n1 : 6.00 14907.50 58.23 0.00 0.00 0.00 0.00 0.00 00:09:00.882 =================================================================================================================== 00:09:00.882 Total : 14907.50 58.23 0.00 0.00 0.00 0.00 0.00 00:09:00.882 00:09:01.822 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.822 Nvme0n1 : 7.00 14946.14 58.38 0.00 0.00 0.00 0.00 0.00 00:09:01.822 
=================================================================================================================== 00:09:01.822 Total : 14946.14 58.38 0.00 0.00 0.00 0.00 0.00 00:09:01.822 00:09:02.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.761 Nvme0n1 : 8.00 14967.00 58.46 0.00 0.00 0.00 0.00 0.00 00:09:02.761 =================================================================================================================== 00:09:02.761 Total : 14967.00 58.46 0.00 0.00 0.00 0.00 0.00 00:09:02.761 00:09:03.696 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.696 Nvme0n1 : 9.00 14983.22 58.53 0.00 0.00 0.00 0.00 0.00 00:09:03.696 =================================================================================================================== 00:09:03.696 Total : 14983.22 58.53 0.00 0.00 0.00 0.00 0.00 00:09:03.696 00:09:04.634 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.634 Nvme0n1 : 10.00 14996.40 58.58 0.00 0.00 0.00 0.00 0.00 00:09:04.634 =================================================================================================================== 00:09:04.634 Total : 14996.40 58.58 0.00 0.00 0.00 0.00 0.00 00:09:04.634 00:09:04.634 00:09:04.634 Latency(us) 00:09:04.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.634 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.634 Nvme0n1 : 10.01 15000.19 58.59 0.00 0.00 8527.83 4247.70 16893.72 00:09:04.634 =================================================================================================================== 00:09:04.634 Total : 15000.19 58.59 0.00 0.00 8527.83 4247.70 16893.72 00:09:04.634 0 00:09:04.634 00:51:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1729393 00:09:04.634 00:51:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@950 -- # '[' -z 1729393 ']' 00:09:04.634 00:51:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1729393 00:09:04.634 00:51:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:04.634 00:51:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:04.634 00:51:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1729393 00:09:04.634 00:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:04.634 00:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:04.634 00:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1729393' 00:09:04.634 killing process with pid 1729393 00:09:04.634 00:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1729393 00:09:04.634 Received shutdown signal, test time was about 10.000000 seconds 00:09:04.634 00:09:04.634 Latency(us) 00:09:04.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.634 =================================================================================================================== 00:09:04.634 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:04.634 00:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1729393 00:09:04.892 00:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:05.150 00:51:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:05.408 00:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b57ec0ff-b118-4f14-b958-0185487e9b74 00:09:05.408 00:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:05.668 00:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:05.668 00:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:05.668 00:51:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:05.927 [2024-07-26 00:51:36.208849] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:05.927 00:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b57ec0ff-b118-4f14-b958-0185487e9b74 00:09:05.927 00:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:05.927 00:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b57ec0ff-b118-4f14-b958-0185487e9b74 00:09:05.927 00:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:05.927 00:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:05.927 00:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:05.927 00:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:05.927 00:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:05.927 00:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:05.927 00:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:05.927 00:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:05.927 00:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b57ec0ff-b118-4f14-b958-0185487e9b74 00:09:06.185 request: 00:09:06.185 { 00:09:06.185 "uuid": "b57ec0ff-b118-4f14-b958-0185487e9b74", 00:09:06.185 "method": "bdev_lvol_get_lvstores", 00:09:06.185 "req_id": 1 00:09:06.185 } 00:09:06.185 Got JSON-RPC error response 00:09:06.185 response: 00:09:06.185 { 00:09:06.185 "code": -19, 00:09:06.185 "message": "No such device" 00:09:06.185 } 00:09:06.185 00:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:06.185 00:51:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:06.185 00:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:06.185 00:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:06.185 00:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:06.443 aio_bdev 00:09:06.443 00:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9d64c739-708a-43d7-9dd4-1b14248c82e3 00:09:06.443 00:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=9d64c739-708a-43d7-9dd4-1b14248c82e3 00:09:06.443 00:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:06.443 00:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:06.443 00:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:06.443 00:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:06.443 00:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:06.702 00:51:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9d64c739-708a-43d7-9dd4-1b14248c82e3 -t 2000 00:09:06.959 [ 00:09:06.959 { 
00:09:06.959 "name": "9d64c739-708a-43d7-9dd4-1b14248c82e3", 00:09:06.959 "aliases": [ 00:09:06.959 "lvs/lvol" 00:09:06.959 ], 00:09:06.959 "product_name": "Logical Volume", 00:09:06.959 "block_size": 4096, 00:09:06.959 "num_blocks": 38912, 00:09:06.959 "uuid": "9d64c739-708a-43d7-9dd4-1b14248c82e3", 00:09:06.959 "assigned_rate_limits": { 00:09:06.959 "rw_ios_per_sec": 0, 00:09:06.959 "rw_mbytes_per_sec": 0, 00:09:06.959 "r_mbytes_per_sec": 0, 00:09:06.959 "w_mbytes_per_sec": 0 00:09:06.959 }, 00:09:06.959 "claimed": false, 00:09:06.959 "zoned": false, 00:09:06.959 "supported_io_types": { 00:09:06.959 "read": true, 00:09:06.959 "write": true, 00:09:06.959 "unmap": true, 00:09:06.959 "flush": false, 00:09:06.959 "reset": true, 00:09:06.959 "nvme_admin": false, 00:09:06.959 "nvme_io": false, 00:09:06.959 "nvme_io_md": false, 00:09:06.959 "write_zeroes": true, 00:09:06.959 "zcopy": false, 00:09:06.959 "get_zone_info": false, 00:09:06.959 "zone_management": false, 00:09:06.959 "zone_append": false, 00:09:06.959 "compare": false, 00:09:06.959 "compare_and_write": false, 00:09:06.959 "abort": false, 00:09:06.959 "seek_hole": true, 00:09:06.959 "seek_data": true, 00:09:06.959 "copy": false, 00:09:06.959 "nvme_iov_md": false 00:09:06.959 }, 00:09:06.959 "driver_specific": { 00:09:06.959 "lvol": { 00:09:06.959 "lvol_store_uuid": "b57ec0ff-b118-4f14-b958-0185487e9b74", 00:09:06.959 "base_bdev": "aio_bdev", 00:09:06.959 "thin_provision": false, 00:09:06.959 "num_allocated_clusters": 38, 00:09:06.959 "snapshot": false, 00:09:06.959 "clone": false, 00:09:06.959 "esnap_clone": false 00:09:06.959 } 00:09:06.959 } 00:09:06.959 } 00:09:06.959 ] 00:09:06.959 00:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:06.959 00:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
b57ec0ff-b118-4f14-b958-0185487e9b74 00:09:06.959 00:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:07.217 00:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:07.217 00:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b57ec0ff-b118-4f14-b958-0185487e9b74 00:09:07.217 00:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:07.476 00:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:07.476 00:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9d64c739-708a-43d7-9dd4-1b14248c82e3 00:09:07.736 00:51:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b57ec0ff-b118-4f14-b958-0185487e9b74 00:09:07.996 00:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:08.256 00:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:08.256 00:09:08.256 real 0m17.260s 00:09:08.256 user 0m16.508s 00:09:08.256 sys 0m1.966s 00:09:08.256 00:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.256 00:51:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:08.256 ************************************ 00:09:08.256 END TEST lvs_grow_clean 00:09:08.256 ************************************ 00:09:08.256 00:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:08.256 00:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:08.256 00:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.256 00:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:08.256 ************************************ 00:09:08.256 START TEST lvs_grow_dirty 00:09:08.256 ************************************ 00:09:08.256 00:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:08.256 00:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:08.256 00:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:08.256 00:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:08.256 00:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:08.256 00:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:08.256 00:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:08.256 00:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:08.256 00:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:08.256 00:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:08.513 00:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:08.513 00:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:08.772 00:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=da9882ad-21fe-4aab-ad92-3f5d44cad383 00:09:08.772 00:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da9882ad-21fe-4aab-ad92-3f5d44cad383 00:09:08.772 00:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:09.050 00:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:09.050 00:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:09.050 00:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 
da9882ad-21fe-4aab-ad92-3f5d44cad383 lvol 150 00:09:09.328 00:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=afa5338e-d00f-42dd-a650-3cd83845b3bf 00:09:09.328 00:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:09.328 00:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:09.586 [2024-07-26 00:51:39.925489] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:09.586 [2024-07-26 00:51:39.925587] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:09.586 true 00:09:09.586 00:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da9882ad-21fe-4aab-ad92-3f5d44cad383 00:09:09.586 00:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:09.846 00:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:09.846 00:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:10.104 00:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
afa5338e-d00f-42dd-a650-3cd83845b3bf 00:09:10.362 00:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:10.621 [2024-07-26 00:51:40.964782] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:10.621 00:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:10.880 00:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1731500 00:09:10.880 00:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:10.880 00:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1731500 /var/tmp/bdevperf.sock 00:09:10.880 00:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:10.880 00:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1731500 ']' 00:09:10.880 00:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:10.880 00:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:10.880 00:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:10.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:10.880 00:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:10.880 00:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:10.880 [2024-07-26 00:51:41.261990] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:09:10.880 [2024-07-26 00:51:41.262088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1731500 ] 00:09:10.880 EAL: No free 2048 kB hugepages reported on node 1 00:09:11.138 [2024-07-26 00:51:41.320275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.138 [2024-07-26 00:51:41.405081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.138 00:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:11.138 00:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:11.138 00:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:11.397 Nvme0n1 00:09:11.655 00:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:11.914 [ 00:09:11.914 { 00:09:11.914 "name": "Nvme0n1", 00:09:11.914 "aliases": [ 
00:09:11.914 "afa5338e-d00f-42dd-a650-3cd83845b3bf" 00:09:11.914 ], 00:09:11.914 "product_name": "NVMe disk", 00:09:11.914 "block_size": 4096, 00:09:11.914 "num_blocks": 38912, 00:09:11.914 "uuid": "afa5338e-d00f-42dd-a650-3cd83845b3bf", 00:09:11.914 "assigned_rate_limits": { 00:09:11.914 "rw_ios_per_sec": 0, 00:09:11.914 "rw_mbytes_per_sec": 0, 00:09:11.914 "r_mbytes_per_sec": 0, 00:09:11.914 "w_mbytes_per_sec": 0 00:09:11.914 }, 00:09:11.914 "claimed": false, 00:09:11.914 "zoned": false, 00:09:11.914 "supported_io_types": { 00:09:11.914 "read": true, 00:09:11.914 "write": true, 00:09:11.914 "unmap": true, 00:09:11.914 "flush": true, 00:09:11.915 "reset": true, 00:09:11.915 "nvme_admin": true, 00:09:11.915 "nvme_io": true, 00:09:11.915 "nvme_io_md": false, 00:09:11.915 "write_zeroes": true, 00:09:11.915 "zcopy": false, 00:09:11.915 "get_zone_info": false, 00:09:11.915 "zone_management": false, 00:09:11.915 "zone_append": false, 00:09:11.915 "compare": true, 00:09:11.915 "compare_and_write": true, 00:09:11.915 "abort": true, 00:09:11.915 "seek_hole": false, 00:09:11.915 "seek_data": false, 00:09:11.915 "copy": true, 00:09:11.915 "nvme_iov_md": false 00:09:11.915 }, 00:09:11.915 "memory_domains": [ 00:09:11.915 { 00:09:11.915 "dma_device_id": "system", 00:09:11.915 "dma_device_type": 1 00:09:11.915 } 00:09:11.915 ], 00:09:11.915 "driver_specific": { 00:09:11.915 "nvme": [ 00:09:11.915 { 00:09:11.915 "trid": { 00:09:11.915 "trtype": "TCP", 00:09:11.915 "adrfam": "IPv4", 00:09:11.915 "traddr": "10.0.0.2", 00:09:11.915 "trsvcid": "4420", 00:09:11.915 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:11.915 }, 00:09:11.915 "ctrlr_data": { 00:09:11.915 "cntlid": 1, 00:09:11.915 "vendor_id": "0x8086", 00:09:11.915 "model_number": "SPDK bdev Controller", 00:09:11.915 "serial_number": "SPDK0", 00:09:11.915 "firmware_revision": "24.09", 00:09:11.915 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:11.915 "oacs": { 00:09:11.915 "security": 0, 00:09:11.915 "format": 0, 00:09:11.915 
"firmware": 0, 00:09:11.915 "ns_manage": 0 00:09:11.915 }, 00:09:11.915 "multi_ctrlr": true, 00:09:11.915 "ana_reporting": false 00:09:11.915 }, 00:09:11.915 "vs": { 00:09:11.915 "nvme_version": "1.3" 00:09:11.915 }, 00:09:11.915 "ns_data": { 00:09:11.915 "id": 1, 00:09:11.915 "can_share": true 00:09:11.915 } 00:09:11.915 } 00:09:11.915 ], 00:09:11.915 "mp_policy": "active_passive" 00:09:11.915 } 00:09:11.915 } 00:09:11.915 ] 00:09:11.915 00:51:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1731626 00:09:11.915 00:51:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:11.915 00:51:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:11.915 Running I/O for 10 seconds... 00:09:12.859 Latency(us) 00:09:12.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.859 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.859 Nvme0n1 : 1.00 14163.00 55.32 0.00 0.00 0.00 0.00 0.00 00:09:12.859 =================================================================================================================== 00:09:12.859 Total : 14163.00 55.32 0.00 0.00 0.00 0.00 0.00 00:09:12.859 00:09:13.793 00:51:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u da9882ad-21fe-4aab-ad92-3f5d44cad383 00:09:14.050 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.050 Nvme0n1 : 2.00 14289.00 55.82 0.00 0.00 0.00 0.00 0.00 00:09:14.050 =================================================================================================================== 00:09:14.050 Total : 14289.00 55.82 
0.00 0.00 0.00 0.00 0.00 00:09:14.050 00:09:14.050 true 00:09:14.050 00:51:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da9882ad-21fe-4aab-ad92-3f5d44cad383 00:09:14.050 00:51:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:14.309 00:51:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:14.309 00:51:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:14.309 00:51:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1731626 00:09:14.876 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.876 Nvme0n1 : 3.00 14396.67 56.24 0.00 0.00 0.00 0.00 0.00 00:09:14.876 =================================================================================================================== 00:09:14.876 Total : 14396.67 56.24 0.00 0.00 0.00 0.00 0.00 00:09:14.876 00:09:16.254 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.254 Nvme0n1 : 4.00 14433.25 56.38 0.00 0.00 0.00 0.00 0.00 00:09:16.254 =================================================================================================================== 00:09:16.254 Total : 14433.25 56.38 0.00 0.00 0.00 0.00 0.00 00:09:16.254 00:09:17.191 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.191 Nvme0n1 : 5.00 14467.60 56.51 0.00 0.00 0.00 0.00 0.00 00:09:17.191 =================================================================================================================== 00:09:17.191 Total : 14467.60 56.51 0.00 0.00 0.00 0.00 0.00 00:09:17.191 00:09:18.129 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:09:18.129 Nvme0n1 : 6.00 14490.50 56.60 0.00 0.00 0.00 0.00 0.00 00:09:18.129 =================================================================================================================== 00:09:18.129 Total : 14490.50 56.60 0.00 0.00 0.00 0.00 0.00 00:09:18.129 00:09:19.066 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.066 Nvme0n1 : 7.00 14507.86 56.67 0.00 0.00 0.00 0.00 0.00 00:09:19.066 =================================================================================================================== 00:09:19.066 Total : 14507.86 56.67 0.00 0.00 0.00 0.00 0.00 00:09:19.066 00:09:20.006 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.006 Nvme0n1 : 8.00 14536.25 56.78 0.00 0.00 0.00 0.00 0.00 00:09:20.006 =================================================================================================================== 00:09:20.006 Total : 14536.25 56.78 0.00 0.00 0.00 0.00 0.00 00:09:20.006 00:09:20.943 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.943 Nvme0n1 : 9.00 14551.11 56.84 0.00 0.00 0.00 0.00 0.00 00:09:20.943 =================================================================================================================== 00:09:20.943 Total : 14551.11 56.84 0.00 0.00 0.00 0.00 0.00 00:09:20.943 00:09:21.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.879 Nvme0n1 : 10.00 14563.00 56.89 0.00 0.00 0.00 0.00 0.00 00:09:21.879 =================================================================================================================== 00:09:21.879 Total : 14563.00 56.89 0.00 0.00 0.00 0.00 0.00 00:09:21.879 00:09:21.879 00:09:21.879 Latency(us) 00:09:21.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:21.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.880 Nvme0n1 : 10.01 14562.07 56.88 0.00 0.00 8784.88 
2318.03 16990.81 00:09:21.880 =================================================================================================================== 00:09:21.880 Total : 14562.07 56.88 0.00 0.00 8784.88 2318.03 16990.81 00:09:21.880 0 00:09:21.880 00:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1731500 00:09:21.880 00:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1731500 ']' 00:09:21.880 00:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1731500 00:09:21.880 00:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:21.880 00:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:21.880 00:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1731500 00:09:22.137 00:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:22.137 00:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:22.137 00:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1731500' 00:09:22.137 killing process with pid 1731500 00:09:22.137 00:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1731500 00:09:22.137 Received shutdown signal, test time was about 10.000000 seconds 00:09:22.137 00:09:22.137 Latency(us) 00:09:22.137 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.137 =================================================================================================================== 00:09:22.137 Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:09:22.137 00:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1731500 00:09:22.137 00:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:22.707 00:51:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:22.967 00:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da9882ad-21fe-4aab-ad92-3f5d44cad383 00:09:22.967 00:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:23.225 00:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:23.225 00:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:23.225 00:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1728985 00:09:23.225 00:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1728985 00:09:23.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1728985 Killed "${NVMF_APP[@]}" "$@" 00:09:23.225 00:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:23.225 00:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:23.225 00:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty 
-- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:23.225 00:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:23.225 00:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:23.225 00:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1732972 00:09:23.225 00:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:23.225 00:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1732972 00:09:23.225 00:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1732972 ']' 00:09:23.225 00:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.225 00:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:23.225 00:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.225 00:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:23.225 00:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:23.225 [2024-07-26 00:51:53.499876] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:09:23.225 [2024-07-26 00:51:53.499951] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.225 EAL: No free 2048 kB hugepages reported on node 1 00:09:23.225 [2024-07-26 00:51:53.573551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.483 [2024-07-26 00:51:53.663764] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:23.483 [2024-07-26 00:51:53.663824] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:23.483 [2024-07-26 00:51:53.663857] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:23.483 [2024-07-26 00:51:53.663872] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:23.483 [2024-07-26 00:51:53.663885] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:23.483 [2024-07-26 00:51:53.663914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.483 00:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:23.483 00:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:23.483 00:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:23.483 00:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:23.483 00:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:23.483 00:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:23.483 00:51:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:23.741 [2024-07-26 00:51:54.068087] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:23.741 [2024-07-26 00:51:54.068221] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:23.741 [2024-07-26 00:51:54.068273] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:23.741 00:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:23.742 00:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev afa5338e-d00f-42dd-a650-3cd83845b3bf 00:09:23.742 00:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=afa5338e-d00f-42dd-a650-3cd83845b3bf 
00:09:23.742 00:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:23.742 00:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:23.742 00:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:23.742 00:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:23.742 00:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:23.999 00:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b afa5338e-d00f-42dd-a650-3cd83845b3bf -t 2000 00:09:24.258 [ 00:09:24.258 { 00:09:24.258 "name": "afa5338e-d00f-42dd-a650-3cd83845b3bf", 00:09:24.258 "aliases": [ 00:09:24.258 "lvs/lvol" 00:09:24.258 ], 00:09:24.258 "product_name": "Logical Volume", 00:09:24.258 "block_size": 4096, 00:09:24.258 "num_blocks": 38912, 00:09:24.258 "uuid": "afa5338e-d00f-42dd-a650-3cd83845b3bf", 00:09:24.258 "assigned_rate_limits": { 00:09:24.258 "rw_ios_per_sec": 0, 00:09:24.258 "rw_mbytes_per_sec": 0, 00:09:24.258 "r_mbytes_per_sec": 0, 00:09:24.258 "w_mbytes_per_sec": 0 00:09:24.258 }, 00:09:24.258 "claimed": false, 00:09:24.258 "zoned": false, 00:09:24.258 "supported_io_types": { 00:09:24.258 "read": true, 00:09:24.259 "write": true, 00:09:24.259 "unmap": true, 00:09:24.259 "flush": false, 00:09:24.259 "reset": true, 00:09:24.259 "nvme_admin": false, 00:09:24.259 "nvme_io": false, 00:09:24.259 "nvme_io_md": false, 00:09:24.259 "write_zeroes": true, 00:09:24.259 "zcopy": false, 00:09:24.259 "get_zone_info": false, 00:09:24.259 "zone_management": false, 00:09:24.259 "zone_append": 
false, 00:09:24.259 "compare": false, 00:09:24.259 "compare_and_write": false, 00:09:24.259 "abort": false, 00:09:24.259 "seek_hole": true, 00:09:24.259 "seek_data": true, 00:09:24.259 "copy": false, 00:09:24.259 "nvme_iov_md": false 00:09:24.259 }, 00:09:24.259 "driver_specific": { 00:09:24.259 "lvol": { 00:09:24.259 "lvol_store_uuid": "da9882ad-21fe-4aab-ad92-3f5d44cad383", 00:09:24.259 "base_bdev": "aio_bdev", 00:09:24.259 "thin_provision": false, 00:09:24.259 "num_allocated_clusters": 38, 00:09:24.259 "snapshot": false, 00:09:24.259 "clone": false, 00:09:24.259 "esnap_clone": false 00:09:24.259 } 00:09:24.259 } 00:09:24.259 } 00:09:24.259 ] 00:09:24.259 00:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:24.259 00:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da9882ad-21fe-4aab-ad92-3f5d44cad383 00:09:24.259 00:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:24.516 00:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:24.516 00:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da9882ad-21fe-4aab-ad92-3f5d44cad383 00:09:24.516 00:51:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:24.775 00:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:24.775 00:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:25.035 [2024-07-26 00:51:55.289463] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:25.035 00:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da9882ad-21fe-4aab-ad92-3f5d44cad383 00:09:25.035 00:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:25.035 00:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da9882ad-21fe-4aab-ad92-3f5d44cad383 00:09:25.035 00:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:25.035 00:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.035 00:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:25.035 00:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.035 00:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:25.035 00:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.035 00:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:25.035 00:51:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:25.035 00:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da9882ad-21fe-4aab-ad92-3f5d44cad383 00:09:25.293 request: 00:09:25.293 { 00:09:25.293 "uuid": "da9882ad-21fe-4aab-ad92-3f5d44cad383", 00:09:25.294 "method": "bdev_lvol_get_lvstores", 00:09:25.294 "req_id": 1 00:09:25.294 } 00:09:25.294 Got JSON-RPC error response 00:09:25.294 response: 00:09:25.294 { 00:09:25.294 "code": -19, 00:09:25.294 "message": "No such device" 00:09:25.294 } 00:09:25.294 00:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:25.294 00:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:25.294 00:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:25.294 00:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:25.294 00:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:25.553 aio_bdev 00:09:25.553 00:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev afa5338e-d00f-42dd-a650-3cd83845b3bf 00:09:25.553 00:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=afa5338e-d00f-42dd-a650-3cd83845b3bf 00:09:25.553 00:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:25.553 00:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:25.553 00:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:25.553 00:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:25.553 00:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:25.833 00:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b afa5338e-d00f-42dd-a650-3cd83845b3bf -t 2000 00:09:26.106 [ 00:09:26.106 { 00:09:26.106 "name": "afa5338e-d00f-42dd-a650-3cd83845b3bf", 00:09:26.106 "aliases": [ 00:09:26.106 "lvs/lvol" 00:09:26.106 ], 00:09:26.106 "product_name": "Logical Volume", 00:09:26.106 "block_size": 4096, 00:09:26.106 "num_blocks": 38912, 00:09:26.106 "uuid": "afa5338e-d00f-42dd-a650-3cd83845b3bf", 00:09:26.106 "assigned_rate_limits": { 00:09:26.106 "rw_ios_per_sec": 0, 00:09:26.106 "rw_mbytes_per_sec": 0, 00:09:26.106 "r_mbytes_per_sec": 0, 00:09:26.106 "w_mbytes_per_sec": 0 00:09:26.106 }, 00:09:26.106 "claimed": false, 00:09:26.106 "zoned": false, 00:09:26.106 "supported_io_types": { 00:09:26.106 "read": true, 00:09:26.106 "write": true, 00:09:26.106 "unmap": true, 00:09:26.106 "flush": false, 00:09:26.106 "reset": true, 00:09:26.106 "nvme_admin": false, 00:09:26.106 "nvme_io": false, 00:09:26.106 "nvme_io_md": false, 00:09:26.106 "write_zeroes": true, 00:09:26.106 "zcopy": false, 00:09:26.106 "get_zone_info": false, 00:09:26.106 "zone_management": false, 00:09:26.106 "zone_append": false, 00:09:26.106 "compare": false, 00:09:26.106 "compare_and_write": false, 
00:09:26.106 "abort": false, 00:09:26.106 "seek_hole": true, 00:09:26.106 "seek_data": true, 00:09:26.106 "copy": false, 00:09:26.106 "nvme_iov_md": false 00:09:26.106 }, 00:09:26.106 "driver_specific": { 00:09:26.106 "lvol": { 00:09:26.106 "lvol_store_uuid": "da9882ad-21fe-4aab-ad92-3f5d44cad383", 00:09:26.106 "base_bdev": "aio_bdev", 00:09:26.106 "thin_provision": false, 00:09:26.106 "num_allocated_clusters": 38, 00:09:26.106 "snapshot": false, 00:09:26.106 "clone": false, 00:09:26.106 "esnap_clone": false 00:09:26.106 } 00:09:26.106 } 00:09:26.106 } 00:09:26.106 ] 00:09:26.106 00:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:26.106 00:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da9882ad-21fe-4aab-ad92-3f5d44cad383 00:09:26.106 00:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:26.366 00:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:26.366 00:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da9882ad-21fe-4aab-ad92-3f5d44cad383 00:09:26.366 00:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:26.625 00:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:26.625 00:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete afa5338e-d00f-42dd-a650-3cd83845b3bf 00:09:26.884 00:51:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u da9882ad-21fe-4aab-ad92-3f5d44cad383 00:09:27.143 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:27.403 00:09:27.403 real 0m19.090s 00:09:27.403 user 0m48.161s 00:09:27.403 sys 0m4.598s 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:27.403 ************************************ 00:09:27.403 END TEST lvs_grow_dirty 00:09:27.403 ************************************ 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:27.403 nvmf_trace.0 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:27.403 rmmod nvme_tcp 00:09:27.403 rmmod nvme_fabrics 00:09:27.403 rmmod nvme_keyring 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1732972 ']' 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1732972 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1732972 ']' 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1732972 
00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1732972 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1732972' 00:09:27.403 killing process with pid 1732972 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1732972 00:09:27.403 00:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1732972 00:09:27.662 00:51:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:27.662 00:51:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:27.662 00:51:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:27.662 00:51:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:27.662 00:51:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:27.662 00:51:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.662 00:51:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.662 00:51:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.200 00:52:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:30.200 00:09:30.200 real 0m41.618s 00:09:30.200 user 1m10.289s 00:09:30.200 sys 0m8.425s 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:30.200 ************************************ 00:09:30.200 END TEST nvmf_lvs_grow 00:09:30.200 ************************************ 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:30.200 ************************************ 00:09:30.200 START TEST nvmf_bdev_io_wait 00:09:30.200 ************************************ 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:30.200 * Looking for test storage... 
00:09:30.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:30.200 00:52:00 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:09:30.200 00:52:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:32.104 00:52:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:32.104 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:32.104 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:32.105 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:32.105 00:52:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:32.105 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:32.105 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:09:32.105 00:52:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:32.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:32.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:09:32.105 00:09:32.105 --- 10.0.0.2 ping statistics --- 00:09:32.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.105 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:32.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:32.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:09:32.105 00:09:32.105 --- 10.0.0.1 ping statistics --- 00:09:32.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.105 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1735382 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1735382 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1735382 ']' 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:32.105 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.105 [2024-07-26 00:52:02.317210] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:09:32.105 [2024-07-26 00:52:02.317292] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.105 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.105 [2024-07-26 00:52:02.386421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:32.105 [2024-07-26 00:52:02.478781] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:32.105 [2024-07-26 00:52:02.478852] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.105 [2024-07-26 00:52:02.478867] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.105 [2024-07-26 00:52:02.478878] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.105 [2024-07-26 00:52:02.478888] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:32.105 [2024-07-26 00:52:02.478941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.105 [2024-07-26 00:52:02.478999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.105 [2024-07-26 00:52:02.479076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.106 [2024-07-26 00:52:02.479079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.363 
00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.363 [2024-07-26 00:52:02.643791] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.363 Malloc0 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:32.363 
00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.363 [2024-07-26 00:52:02.704573] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1735521 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1735523 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:32.363 { 00:09:32.363 "params": { 00:09:32.363 "name": "Nvme$subsystem", 00:09:32.363 "trtype": "$TEST_TRANSPORT", 00:09:32.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:32.363 "adrfam": "ipv4", 00:09:32.363 "trsvcid": "$NVMF_PORT", 00:09:32.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:32.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:32.363 "hdgst": ${hdgst:-false}, 00:09:32.363 "ddgst": ${ddgst:-false} 00:09:32.363 }, 00:09:32.363 "method": "bdev_nvme_attach_controller" 00:09:32.363 } 00:09:32.363 EOF 00:09:32.363 )") 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1735525 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:32.363 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:32.363 00:52:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:32.363 { 00:09:32.363 "params": { 00:09:32.363 "name": "Nvme$subsystem", 00:09:32.363 "trtype": "$TEST_TRANSPORT", 00:09:32.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:32.363 "adrfam": "ipv4", 00:09:32.363 "trsvcid": "$NVMF_PORT", 00:09:32.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:32.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:32.363 "hdgst": ${hdgst:-false}, 00:09:32.363 "ddgst": ${ddgst:-false} 00:09:32.363 }, 00:09:32.363 "method": "bdev_nvme_attach_controller" 00:09:32.364 } 00:09:32.364 EOF 00:09:32.364 )") 00:09:32.364 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1735528 00:09:32.364 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:32.364 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:32.364 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:32.364 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:32.364 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:32.364 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:32.364 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:32.364 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:32.364 { 00:09:32.364 "params": { 00:09:32.364 "name": "Nvme$subsystem", 00:09:32.364 "trtype": "$TEST_TRANSPORT", 00:09:32.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:32.364 "adrfam": "ipv4", 
00:09:32.364 "trsvcid": "$NVMF_PORT", 00:09:32.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:32.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:32.364 "hdgst": ${hdgst:-false}, 00:09:32.364 "ddgst": ${ddgst:-false} 00:09:32.364 }, 00:09:32.364 "method": "bdev_nvme_attach_controller" 00:09:32.364 } 00:09:32.364 EOF 00:09:32.364 )") 00:09:32.364 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:32.364 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:32.364 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:32.364 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:32.364 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:32.364 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:32.364 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:32.364 { 00:09:32.364 "params": { 00:09:32.364 "name": "Nvme$subsystem", 00:09:32.364 "trtype": "$TEST_TRANSPORT", 00:09:32.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:32.364 "adrfam": "ipv4", 00:09:32.364 "trsvcid": "$NVMF_PORT", 00:09:32.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:32.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:32.364 "hdgst": ${hdgst:-false}, 00:09:32.364 "ddgst": ${ddgst:-false} 00:09:32.364 }, 00:09:32.364 "method": "bdev_nvme_attach_controller" 00:09:32.364 } 00:09:32.364 EOF 00:09:32.364 )") 00:09:32.364 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:32.364 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@37 -- # wait 1735521 00:09:32.364 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:32.364 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:32.364 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:32.364 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:32.364 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:32.364 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:32.364 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:32.364 "params": { 00:09:32.364 "name": "Nvme1", 00:09:32.364 "trtype": "tcp", 00:09:32.364 "traddr": "10.0.0.2", 00:09:32.364 "adrfam": "ipv4", 00:09:32.364 "trsvcid": "4420", 00:09:32.364 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:32.364 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:32.364 "hdgst": false, 00:09:32.364 "ddgst": false 00:09:32.364 }, 00:09:32.364 "method": "bdev_nvme_attach_controller" 00:09:32.364 }' 00:09:32.364 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:32.364 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:32.364 "params": { 00:09:32.364 "name": "Nvme1", 00:09:32.364 "trtype": "tcp", 00:09:32.364 "traddr": "10.0.0.2", 00:09:32.364 "adrfam": "ipv4", 00:09:32.364 "trsvcid": "4420", 00:09:32.364 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:32.364 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:32.364 "hdgst": false, 00:09:32.364 "ddgst": false 00:09:32.364 }, 00:09:32.364 "method": "bdev_nvme_attach_controller" 00:09:32.364 }' 00:09:32.364 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:32.364 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 
-- # IFS=, 00:09:32.364 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:32.364 "params": { 00:09:32.364 "name": "Nvme1", 00:09:32.364 "trtype": "tcp", 00:09:32.364 "traddr": "10.0.0.2", 00:09:32.364 "adrfam": "ipv4", 00:09:32.364 "trsvcid": "4420", 00:09:32.364 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:32.364 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:32.364 "hdgst": false, 00:09:32.364 "ddgst": false 00:09:32.364 }, 00:09:32.364 "method": "bdev_nvme_attach_controller" 00:09:32.364 }' 00:09:32.364 00:52:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:32.364 "params": { 00:09:32.364 "name": "Nvme1", 00:09:32.364 "trtype": "tcp", 00:09:32.364 "traddr": "10.0.0.2", 00:09:32.364 "adrfam": "ipv4", 00:09:32.364 "trsvcid": "4420", 00:09:32.364 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:32.364 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:32.364 "hdgst": false, 00:09:32.364 "ddgst": false 00:09:32.364 }, 00:09:32.364 "method": "bdev_nvme_attach_controller" 00:09:32.364 }' 00:09:32.364 [2024-07-26 00:52:02.752219] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:09:32.364 [2024-07-26 00:52:02.752220] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:09:32.364 [2024-07-26 00:52:02.752227] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:09:32.364 [2024-07-26 00:52:02.752220] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:09:32.364 [2024-07-26 00:52:02.752306] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:32.364 
[2024-07-26 00:52:02.752307] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:32.364 
[2024-07-26 00:52:02.752307] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:32.364 
[2024-07-26 00:52:02.752308] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:32.364 
00:09:32.620 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.620 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.620 [2024-07-26 00:52:02.917591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.620 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.620 [2024-07-26 00:52:02.992295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:32.620 [2024-07-26 00:52:03.015730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.878 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.878 [2024-07-26 00:52:03.090825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:32.878 [2024-07-26 00:52:03.113526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.878 [2024-07-26 00:52:03.187911] reactor.c: 941:reactor_run:
*NOTICE*: Reactor started on core 7 00:09:32.878 [2024-07-26 00:52:03.213565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.878 [2024-07-26 00:52:03.292748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:33.135 Running I/O for 1 seconds... 00:09:33.135 Running I/O for 1 seconds... 00:09:33.135 Running I/O for 1 seconds... 00:09:33.391 Running I/O for 1 seconds... 00:09:34.325 00:09:34.325 Latency(us) 00:09:34.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:34.325 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:34.325 Nvme1n1 : 1.02 6647.43 25.97 0.00 0.00 19115.08 7670.14 29515.47 00:09:34.325 =================================================================================================================== 00:09:34.325 Total : 6647.43 25.97 0.00 0.00 19115.08 7670.14 29515.47 00:09:34.325 00:09:34.325 Latency(us) 00:09:34.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:34.325 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:34.325 Nvme1n1 : 1.01 10779.62 42.11 0.00 0.00 11825.80 5849.69 21262.79 00:09:34.325 =================================================================================================================== 00:09:34.325 Total : 10779.62 42.11 0.00 0.00 11825.80 5849.69 21262.79 00:09:34.325 00:09:34.325 Latency(us) 00:09:34.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:34.325 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:34.325 Nvme1n1 : 1.01 6216.60 24.28 0.00 0.00 20510.01 7815.77 45244.11 00:09:34.325 =================================================================================================================== 00:09:34.325 Total : 6216.60 24.28 0.00 0.00 20510.01 7815.77 45244.11 00:09:34.325 00:09:34.325 Latency(us) 00:09:34.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:09:34.325 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:34.325 Nvme1n1 : 1.00 196514.63 767.64 0.00 0.00 648.90 274.58 873.81 00:09:34.325 =================================================================================================================== 00:09:34.325 Total : 196514.63 767.64 0.00 0.00 648.90 274.58 873.81 00:09:34.325 00:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1735523 00:09:34.583 00:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1735525 00:09:34.583 00:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1735528 00:09:34.583 00:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:34.583 00:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.583 00:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.583 00:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.583 00:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:34.583 00:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:34.583 00:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:34.583 00:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:34.583 00:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:34.583 00:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:34.583 00:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:34.583 00:52:04 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:34.583 rmmod nvme_tcp 00:09:34.583 rmmod nvme_fabrics 00:09:34.583 rmmod nvme_keyring 00:09:34.583 00:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:34.583 00:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:34.583 00:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:34.583 00:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1735382 ']' 00:09:34.583 00:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1735382 00:09:34.583 00:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1735382 ']' 00:09:34.583 00:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1735382 00:09:34.583 00:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:34.583 00:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:34.583 00:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1735382 00:09:34.583 00:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:34.583 00:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:34.584 00:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1735382' 00:09:34.584 killing process with pid 1735382 00:09:34.584 00:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1735382 00:09:34.584 00:52:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@974 -- # wait 1735382 00:09:34.842 00:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:34.842 00:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:34.842 00:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:34.842 00:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:34.842 00:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:34.842 00:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.842 00:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.842 00:52:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:37.377 00:09:37.377 real 0m7.147s 00:09:37.377 user 0m15.856s 00:09:37.377 sys 0m3.747s 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:37.377 ************************************ 00:09:37.377 END TEST nvmf_bdev_io_wait 00:09:37.377 ************************************ 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:37.377 ************************************ 00:09:37.377 START TEST nvmf_queue_depth 00:09:37.377 ************************************ 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:37.377 * Looking for test storage... 00:09:37.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.377 00:52:07 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:09:37.377 00:52:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:39.283 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ 
ice == unbound ]] 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:39.283 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:39.283 00:52:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:39.283 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:39.283 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.283 00:52:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:39.283 
00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:39.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:39.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:09:39.283 00:09:39.283 --- 10.0.0.2 ping statistics --- 00:09:39.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.283 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:39.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:39.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:09:39.283 00:09:39.283 --- 10.0.0.1 ping statistics --- 00:09:39.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.283 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:39.283 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:09:39.284 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:39.284 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:39.284 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:39.284 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:39.284 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:39.284 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:39.284 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:39.284 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:39.284 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:39.284 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:39.284 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:39.284 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1737740 00:09:39.284 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec 
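The namespace plumbing logged just above (`nvmf_tcp_init`, nvmf/common.sh lines 229-268) can be read more easily as a standalone sketch. Interface names (`cvl_0_0`, `cvl_0_1`), addresses, and command order are taken verbatim from this log; executing for real requires root and the same two-port E810 NIC, so with `DRY_RUN=1` (the default here) the script only prints each command.

```shell
# Sketch of nvmf_tcp_init's topology as logged above: the target NIC port is
# moved into its own network namespace so target and initiator can exchange
# real NVMe/TCP traffic on one host. DRY_RUN=1 prints instead of executing.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk                                   # target-side namespace
run ip -4 addr flush cvl_0_0                         # start from clean addrs
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                  # target port -> namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
run ping -c 1 10.0.0.2                               # initiator -> target check
run ip netns exec "$NS" ping -c 1 10.0.0.1           # target -> initiator check
```

The sub-millisecond ping round trips in the log (0.249 ms and 0.092 ms) confirm both directions of this topology before the target is started.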
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:39.284 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1737740 00:09:39.284 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1737740 ']' 00:09:39.284 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.284 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:39.284 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.284 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:39.284 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:39.284 [2024-07-26 00:52:09.533967] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:09:39.284 [2024-07-26 00:52:09.534044] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.284 EAL: No free 2048 kB hugepages reported on node 1 00:09:39.284 [2024-07-26 00:52:09.602110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.284 [2024-07-26 00:52:09.687542] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:39.284 [2024-07-26 00:52:09.687603] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:39.284 [2024-07-26 00:52:09.687622] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:39.284 [2024-07-26 00:52:09.687655] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:39.284 [2024-07-26 00:52:09.687672] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:39.284 [2024-07-26 00:52:09.687724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.542 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:39.542 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:39.542 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:39.542 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:39.542 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:39.542 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:39.542 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:39.542 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.542 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:39.542 [2024-07-26 00:52:09.822240] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:39.542 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.542 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:39.542 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.542 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:39.542 Malloc0 00:09:39.542 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.543 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:39.543 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.543 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:39.543 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.543 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:39.543 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.543 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:39.543 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.543 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:39.543 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.543 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:39.543 [2024-07-26 00:52:09.882337] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:39.543 00:52:09 
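Scattered through the xtrace above are the five `rpc_cmd` calls (queue_depth.sh lines 23-27) that build the whole target. Gathered in one place as a sketch: the arguments are copied verbatim from the log, while the `RPC` path (relative to an SPDK checkout) is an assumption, and the commands are printed rather than executed since they need a running `nvmf_tgt`.

```shell
# The target-setup RPC sequence from queue_depth.sh, collected for
# readability. RPC path is an assumed checkout-relative location;
# all arguments are verbatim from the log above.
RPC=${RPC:-scripts/rpc.py}
NQN=nqn.2016-06.io.spdk:cnode1

target_setup_cmds() {
  echo "$RPC nvmf_create_transport -t tcp -o -u 8192"   # -u: in-capsule data size (bytes)
  echo "$RPC bdev_malloc_create 64 512 -b Malloc0"      # 64 MiB RAM bdev, 512 B blocks
  echo "$RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001"  # -a: allow any host
  echo "$RPC nvmf_subsystem_add_ns $NQN Malloc0"        # expose the bdev as namespace 1
  echo "$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420"
}
target_setup_cmds   # pipe to sh to replay against a running nvmf_tgt
```

The "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice above is the listener RPC taking effect inside the `cvl_0_0_ns_spdk` namespace.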
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.543 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1737770 00:09:39.543 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:39.543 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:39.543 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1737770 /var/tmp/bdevperf.sock 00:09:39.543 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1737770 ']' 00:09:39.543 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:39.543 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:39.543 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:39.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:39.543 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:39.543 00:52:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:39.543 [2024-07-26 00:52:09.927785] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:09:39.543 [2024-07-26 00:52:09.927847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1737770 ] 00:09:39.543 EAL: No free 2048 kB hugepages reported on node 1 00:09:39.801 [2024-07-26 00:52:09.987734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.801 [2024-07-26 00:52:10.083981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.801 00:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:39.801 00:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:39.801 00:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:39.802 00:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.802 00:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:40.060 NVMe0n1 00:09:40.060 00:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.060 00:52:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:40.060 Running I/O for 10 seconds... 
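The measurement itself is three steps, all visible above: bdevperf starts idle (`-z`) with the queue-depth-1024 workload parameters, the target's namespace is attached over the bdevperf RPC socket, then `bdevperf.py perform_tests` triggers the timed run. A sketch with arguments verbatim from the log; the checkout-relative binary paths are assumptions, and the commands are printed rather than executed:

```shell
# How the queue-depth run above is driven. Arguments are verbatim from the
# log; paths relative to an SPDK checkout are assumptions. Printed, not run.
SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

queue_depth_run_cmds() {
  # 1. start bdevperf idle (-z): queue depth 1024, 4 KiB verify I/O, 10 s
  echo "build/examples/bdevperf -z -r $SOCK -q 1024 -o 4096 -w verify -t 10 &"
  # 2. attach the target namespace as bdev NVMe0n1 over NVMe/TCP
  echo "scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN"
  # 3. trigger the timed run against every attached bdev
  echo "examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests"
}
queue_depth_run_cmds
```

The result table that follows (about 8122 IOPS at an average latency of roughly 125.6 ms) is consistent with a 1024-deep queue: latency is dominated by queueing, not device service time.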
00:09:52.277 00:09:52.277 Latency(us) 00:09:52.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.277 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:52.277 Verification LBA range: start 0x0 length 0x4000 00:09:52.277 NVMe0n1 : 10.08 8121.87 31.73 0.00 0.00 125556.97 20388.98 88546.42 00:09:52.277 =================================================================================================================== 00:09:52.277 Total : 8121.87 31.73 0.00 0.00 125556.97 20388.98 88546.42 00:09:52.277 0 00:09:52.277 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1737770 00:09:52.277 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1737770 ']' 00:09:52.277 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1737770 00:09:52.277 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:52.277 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:52.277 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1737770 00:09:52.277 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:52.277 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:52.277 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1737770' 00:09:52.277 killing process with pid 1737770 00:09:52.277 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1737770 00:09:52.277 Received shutdown signal, test time was about 10.000000 seconds 00:09:52.277 00:09:52.277 Latency(us) 00:09:52.277 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.277 =================================================================================================================== 00:09:52.277 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:52.277 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1737770 00:09:52.277 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:52.277 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:52.277 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:52.277 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:52.277 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:52.277 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:52.277 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:52.277 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:52.277 rmmod nvme_tcp 00:09:52.277 rmmod nvme_fabrics 00:09:52.277 rmmod nvme_keyring 00:09:52.277 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:52.277 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:52.277 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:52.277 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1737740 ']' 00:09:52.277 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1737740 00:09:52.277 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1737740 ']' 
00:09:52.278 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1737740 00:09:52.278 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:52.278 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:52.278 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1737740 00:09:52.278 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:52.278 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:52.278 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1737740' 00:09:52.278 killing process with pid 1737740 00:09:52.278 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1737740 00:09:52.278 00:52:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1737740 00:09:52.278 00:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:52.278 00:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:52.278 00:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:52.278 00:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:52.278 00:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:52.278 00:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.278 00:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:09:52.278 00:52:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.872 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:52.872 00:09:52.872 real 0m15.911s 00:09:52.872 user 0m21.486s 00:09:52.872 sys 0m3.463s 00:09:52.872 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:52.872 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:52.872 ************************************ 00:09:52.872 END TEST nvmf_queue_depth 00:09:52.872 ************************************ 00:09:52.872 00:52:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:52.872 00:52:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:52.872 00:52:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:52.872 00:52:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:52.872 ************************************ 00:09:52.872 START TEST nvmf_target_multipath 00:09:52.872 ************************************ 00:09:52.872 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:53.132 * Looking for test storage... 
00:09:53.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # 
nqn=nqn.2016-06.io.spdk:cnode1 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:09:53.132 00:52:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:55.035 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:55.035 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@291 -- # pci_devs=() 00:09:55.035 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:55.035 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:55.035 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:55.035 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:55.035 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:55.035 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:09:55.035 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:55.035 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:09:55.035 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:09:55.035 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:09:55.035 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:09:55.035 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:09:55.035 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:09:55.035 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:55.035 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:55.035 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:55.035 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- 
# mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:55.035 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:55.035 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:55.035 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:55.035 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:55.035 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 
00:09:55.036 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:55.036 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
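The trace above shows `gather_supported_nvmf_pci_devs` bucketing each discovered port by its vendor:device pair — here `0x8086:0x159b`, which lands in the `e810` family (the `ice` driver). A minimal sketch of that bucketing, assuming a hypothetical `classify` helper and listing only a few of the IDs that appear in the trace (the real script builds the families from a `pci_bus_cache` map in `nvmf/common.sh`):

```shell
#!/usr/bin/env bash
# Sketch: classify NICs into e810 / x722 / mlx families by vendor:device ID.
# 'classify' is a stand-in for the pci_bus_cache lookups traced above.
declare -A family=(
  [0x8086:0x1592]=e810 [0x8086:0x159b]=e810   # Intel E810 (ice)
  [0x8086:0x37d2]=x722                        # Intel X722
  [0x15b3:0x1017]=mlx  [0x15b3:0x101d]=mlx    # Mellanox ConnectX-5/6
)
classify() { echo "${family[$1:$2]:-unknown}"; }

classify 0x8086 0x159b   # the family of the two ports found in the trace
```

With `0x8086 0x159b` this prints `e810`, matching the `[[ e810 == e810 ]]` branch taken in the log.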
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:55.036 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:55.036 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:55.036 00:52:25 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:55.036 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:55.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:55.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:09:55.295 00:09:55.295 --- 10.0.0.2 ping statistics --- 00:09:55.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.295 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:55.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:55.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:09:55.295 00:09:55.295 --- 10.0.0.1 ping statistics --- 00:09:55.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.295 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:55.295 00:52:25 
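The `nvmf_tcp_init` sequence traced above moves one physical port into a private network namespace so target and initiator can talk over real NICs on one host, then verifies the path with pings in both directions. A condensed sketch of that setup, assuming the interface names from this run (`cvl_0_0`/`cvl_0_1`); it is written as a dry-run (`RUN=echo` prints each command instead of executing it), since the real sequence needs root and the physical ports:

```shell
#!/usr/bin/env bash
# Sketch of the namespace setup nvmf_tcp_init performs in the trace.
# RUN=echo makes this a dry-run; drop it (and run as root) for real.
RUN=echo
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # target side, moved into the namespace
INI_IF=cvl_0_1   # initiator side, stays in the default namespace

$RUN ip -4 addr flush "$TGT_IF"
$RUN ip -4 addr flush "$INI_IF"
$RUN ip netns add "$NS"
$RUN ip link set "$TGT_IF" netns "$NS"
$RUN ip addr add 10.0.0.1/24 dev "$INI_IF"
$RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
$RUN ip link set "$INI_IF" up
$RUN ip netns exec "$NS" ip link set "$TGT_IF" up
$RUN ip netns exec "$NS" ip link set lo up
$RUN iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
```

The two pings in the log (default ns → 10.0.0.2, and `ip netns exec` → 10.0.0.1) confirm both directions before `return 0`.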
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:55.295 only one NIC for nvmf test 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:55.295 rmmod nvme_tcp 00:09:55.295 rmmod nvme_fabrics 00:09:55.295 rmmod nvme_keyring 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:55.295 00:52:25 
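The `nvmfcleanup` trace above (set +e, `for i in {1..20}`, `modprobe -v -r nvme-tcp`, then restoring `set -e`) is a tolerate-and-retry unload pattern: errors are ignored while the modules may still be busy. A rough sketch of that shape, with a hypothetical `unload` stand-in for `modprobe -r` so it runs without root:

```shell
#!/usr/bin/env bash
# Sketch of the nvmfcleanup retry loop traced above.
# 'unload' is a hypothetical stand-in for 'modprobe -v -r'.
unload() { echo "modprobe -v -r $1"; }

set +e                       # tolerate failures while modules drain
for i in {1..20}; do
    unload nvme-tcp && break # stop retrying once the unload succeeds
done
unload nvme-fabrics
set -e                       # restore strict error handling
```

In the log the first attempt succeeds (`rmmod nvme_tcp`, `rmmod nvme_fabrics`, `rmmod nvme_keyring`), so the loop exits on iteration 1 and `return 0` follows.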
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.295 00:52:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.201 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:57.201 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:57.201 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:57.201 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:57.201 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:57.461 00:52:27 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:57.461 00:09:57.461 real 0m4.392s 00:09:57.461 user 0m0.843s 00:09:57.461 sys 0m1.540s 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:57.461 ************************************ 00:09:57.461 END TEST nvmf_target_multipath 00:09:57.461 ************************************ 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:57.461 
00:52:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:57.461 ************************************ 00:09:57.461 START TEST nvmf_zcopy 00:09:57.461 ************************************ 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:57.461 * Looking for test storage... 00:09:57.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:57.461 00:52:27 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:09:57.461 00:52:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.363 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@296 -- # e810=() 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:59.364 00:52:29 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:59.364 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:59.364 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:59.364 00:52:29 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:59.364 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.364 
00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:59.364 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:59.364 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:59.623 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:59.623 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:59.623 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:59.623 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:59.623 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:59.623 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:59.623 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:59.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:59.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:09:59.623 00:09:59.623 --- 10.0.0.2 ping statistics --- 00:09:59.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.623 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:09:59.623 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:59.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:59.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:09:59.623 00:09:59.623 --- 10.0.0.1 ping statistics --- 00:09:59.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.623 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:09:59.623 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.623 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:09:59.623 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:59.623 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.623 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:59.623 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:59.623 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.623 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:59.623 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:59.623 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:59.623 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:59.623 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
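The `nvmf_tcp_init` step traced above boils down to the short command sequence below: move one port of the NIC into a private network namespace for the target, address both sides, bring the links up, open the NVMe/TCP port, and verify reachability with a ping in each direction. Interface names (`cvl_0_0`/`cvl_0_1`), the namespace name, and the 10.0.0.x addresses are taken verbatim from the log; the commands require root and the physical NICs, so this is an illustrative fragment, not a runnable test.

```shell
# Sketch of nvmf_tcp_init as traced above (requires root + real NICs).
NS=cvl_0_0_ns_spdk          # target-side namespace, per the log
TGT_IF=cvl_0_0              # target interface (moved into the netns)
INI_IF=cvl_0_1              # initiator interface (stays in the host ns)

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                       # isolate the target port
ip addr add 10.0.0.1/24 dev "$INI_IF"                   # initiator IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target IP
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
ping -c 1 10.0.0.2                                      # host -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1                  # target ns -> host
```

Putting the target in its own namespace is what lets a single two-port NIC act as both target and initiator on one machine: traffic between 10.0.0.1 and 10.0.0.2 must actually cross the wire instead of being short-circuited by the local routing table.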
-- common/autotest_common.sh@724 -- # xtrace_disable 00:09:59.623 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.623 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1742967 00:09:59.623 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:59.623 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1742967 00:09:59.623 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1742967 ']' 00:09:59.623 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.623 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:59.623 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.624 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:59.624 00:52:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.624 [2024-07-26 00:52:29.972550] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:09:59.624 [2024-07-26 00:52:29.972634] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.624 EAL: No free 2048 kB hugepages reported on node 1 00:09:59.624 [2024-07-26 00:52:30.040127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.883 [2024-07-26 00:52:30.131433] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.883 [2024-07-26 00:52:30.131485] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.883 [2024-07-26 00:52:30.131511] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.883 [2024-07-26 00:52:30.131533] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.883 [2024-07-26 00:52:30.131550] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:59.883 [2024-07-26 00:52:30.131586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.883 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:59.883 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:59.883 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:59.883 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:59.883 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.883 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.883 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:59.883 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:59.883 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.883 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.883 [2024-07-26 00:52:30.282311] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:59.883 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.883 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:59.883 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.883 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.883 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:59.883 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:59.883 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.883 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.883 [2024-07-26 00:52:30.298561] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:59.883 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.883 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:59.883 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.883 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:00.144 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.144 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:00.144 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.144 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:00.144 malloc0 00:10:00.144 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.144 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:00.144 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.144 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:00.144 00:52:30 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.144 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:00.144 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:00.144 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:00.144 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:00.144 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:00.144 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:00.144 { 00:10:00.144 "params": { 00:10:00.144 "name": "Nvme$subsystem", 00:10:00.144 "trtype": "$TEST_TRANSPORT", 00:10:00.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:00.144 "adrfam": "ipv4", 00:10:00.144 "trsvcid": "$NVMF_PORT", 00:10:00.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:00.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:00.144 "hdgst": ${hdgst:-false}, 00:10:00.144 "ddgst": ${ddgst:-false} 00:10:00.144 }, 00:10:00.144 "method": "bdev_nvme_attach_controller" 00:10:00.144 } 00:10:00.144 EOF 00:10:00.144 )") 00:10:00.144 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:00.144 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:10:00.144 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:00.144 00:52:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:00.144 "params": { 00:10:00.144 "name": "Nvme1", 00:10:00.144 "trtype": "tcp", 00:10:00.144 "traddr": "10.0.0.2", 00:10:00.144 "adrfam": "ipv4", 00:10:00.144 "trsvcid": "4420", 00:10:00.144 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:00.144 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:00.144 "hdgst": false, 00:10:00.144 "ddgst": false 00:10:00.144 }, 00:10:00.144 "method": "bdev_nvme_attach_controller" 00:10:00.144 }' 00:10:00.144 [2024-07-26 00:52:30.389173] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:10:00.144 [2024-07-26 00:52:30.389241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1742993 ] 00:10:00.144 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.144 [2024-07-26 00:52:30.452818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.144 [2024-07-26 00:52:30.546648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.404 Running I/O for 10 seconds... 
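The `gen_nvmf_target_json` expansion traced above follows a simple pattern: for each subsystem, build one JSON `bdev_nvme_attach_controller` stanza with a heredoc (so shell parameter expansion fills in the per-subsystem values), collect the stanzas into an array, then join them with commas for `jq`. The sketch below reproduces that pattern self-contained, with the values the log substitutes for subsystem 1 (`Nvme1`, `10.0.0.2:4420`) hard-coded in place of the test-suite variables; `hdgst`/`ddgst` default to `false` exactly as in the log.

```shell
#!/usr/bin/env bash
# Heredoc-per-subsystem config assembly, as in gen_nvmf_target_json above.
config=()
for subsystem in 1; do
  # Each stanza is one array element; $subsystem is expanded inside the heredoc.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Join the stanzas with commas, as the "IFS=," / printf step in the log does.
IFS=,
printf '%s\n' "${config[*]}"
```

The joined output is what the log pipes through `jq .` and feeds to bdevperf via `--json /dev/fd/62`, so bdevperf attaches to the target without any config file on disk.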
00:10:12.626 00:10:12.626 Latency(us) 00:10:12.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:12.626 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:12.626 Verification LBA range: start 0x0 length 0x1000 00:10:12.626 Nvme1n1 : 10.01 5768.21 45.06 0.00 0.00 22129.10 788.86 31651.46 00:10:12.626 =================================================================================================================== 00:10:12.626 Total : 5768.21 45.06 0.00 0.00 22129.10 788.86 31651.46 00:10:12.626 00:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1744277 00:10:12.626 00:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:12.626 00:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:12.626 00:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:12.626 00:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:12.626 00:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:12.626 00:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:12.626 00:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:12.626 00:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:12.626 { 00:10:12.626 "params": { 00:10:12.626 "name": "Nvme$subsystem", 00:10:12.626 "trtype": "$TEST_TRANSPORT", 00:10:12.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:12.626 "adrfam": "ipv4", 00:10:12.626 "trsvcid": "$NVMF_PORT", 00:10:12.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:12.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:12.626 "hdgst": 
${hdgst:-false}, 00:10:12.626 "ddgst": ${ddgst:-false} 00:10:12.626 }, 00:10:12.626 "method": "bdev_nvme_attach_controller" 00:10:12.626 } 00:10:12.626 EOF 00:10:12.626 )") 00:10:12.626 00:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:12.626 [2024-07-26 00:52:41.048034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.626 [2024-07-26 00:52:41.048117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.626 00:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:10:12.626 00:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:12.626 00:52:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:12.626 "params": { 00:10:12.626 "name": "Nvme1", 00:10:12.626 "trtype": "tcp", 00:10:12.626 "traddr": "10.0.0.2", 00:10:12.626 "adrfam": "ipv4", 00:10:12.626 "trsvcid": "4420", 00:10:12.626 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:12.626 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:12.626 "hdgst": false, 00:10:12.626 "ddgst": false 00:10:12.626 }, 00:10:12.626 "method": "bdev_nvme_attach_controller" 00:10:12.626 }' 00:10:12.626 [2024-07-26 00:52:41.055995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.626 [2024-07-26 00:52:41.056026] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.626 [2024-07-26 00:52:41.064028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.626 [2024-07-26 00:52:41.064072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.626 [2024-07-26 00:52:41.072027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.626 [2024-07-26 00:52:41.072052] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.626 [2024-07-26 00:52:41.080055] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.626 [2024-07-26 00:52:41.080087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.626 [2024-07-26 00:52:41.086997] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:10:12.626 [2024-07-26 00:52:41.087079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1744277 ] 00:10:12.626 [2024-07-26 00:52:41.088090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.626 [2024-07-26 00:52:41.088116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.626 [2024-07-26 00:52:41.096108] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.626 [2024-07-26 00:52:41.096143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.626 [2024-07-26 00:52:41.104129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.626 [2024-07-26 00:52:41.104152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.626 [2024-07-26 00:52:41.112138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.626 [2024-07-26 00:52:41.112161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.626 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.626 [2024-07-26 00:52:41.120176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.626 [2024-07-26 00:52:41.120200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.626 [2024-07-26 00:52:41.128208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:12.626 [2024-07-26 00:52:41.128231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.626 [2024-07-26 00:52:41.136231] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.626 [2024-07-26 00:52:41.136253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.626 [2024-07-26 00:52:41.144254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.626 [2024-07-26 00:52:41.144277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.626 [2024-07-26 00:52:41.150212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.626 [2024-07-26 00:52:41.152277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.626 [2024-07-26 00:52:41.152300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.626 [2024-07-26 00:52:41.160329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.626 [2024-07-26 00:52:41.160381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.626 [2024-07-26 00:52:41.168328] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.626 [2024-07-26 00:52:41.168374] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.626 [2024-07-26 00:52:41.176354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.626 [2024-07-26 00:52:41.176376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.626 [2024-07-26 00:52:41.184381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.626 [2024-07-26 00:52:41.184408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.626 [2024-07-26 00:52:41.192405] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.626 [2024-07-26 00:52:41.192431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.626 [2024-07-26 00:52:41.200433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.626 [2024-07-26 00:52:41.200461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.626 [2024-07-26 00:52:41.208478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.626 [2024-07-26 00:52:41.208514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.626 [2024-07-26 00:52:41.216469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.626 [2024-07-26 00:52:41.216495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.626 [2024-07-26 00:52:41.224491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.626 [2024-07-26 00:52:41.224516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.626 [2024-07-26 00:52:41.232515] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.626 [2024-07-26 00:52:41.232541] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.627 [2024-07-26 00:52:41.240537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.627 [2024-07-26 00:52:41.240564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.627 [2024-07-26 00:52:41.246198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.627 [2024-07-26 00:52:41.248563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.627 [2024-07-26 00:52:41.248592] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:12.627 [2024-07-26 00:52:41.256582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.627 [2024-07-26 00:52:41.256610] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.627 [2024-07-26 00:52:41.264632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.627 [2024-07-26 00:52:41.264670] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.627 [2024-07-26 00:52:41.272657] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.627 [2024-07-26 00:52:41.272694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.627 [2024-07-26 00:52:41.280676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.627 [2024-07-26 00:52:41.280714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.627 [2024-07-26 00:52:41.288699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.627 [2024-07-26 00:52:41.288739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.627 [2024-07-26 00:52:41.296723] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.627 [2024-07-26 00:52:41.296761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.627 [2024-07-26 00:52:41.304742] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.627 [2024-07-26 00:52:41.304781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.627 [2024-07-26 00:52:41.312770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.627 [2024-07-26 00:52:41.312809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.627 [2024-07-26 00:52:41.320765] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:12.627 [2024-07-26 00:52:41.320794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:10:12.627 [... the two messages above repeat in lockstep with advancing timestamps through 00:52:41.433195 ...] 
00:10:12.627 Running I/O for 5 seconds... 
00:10:12.627 [2024-07-26 00:52:41.441186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:12.627 [2024-07-26 00:52:41.441211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:10:12.888 [... the same error pair repeats with advancing timestamps through 00:52:43.215; the elapsed stamp advances from 00:10:12.627 to 00:10:12.888 ...] 
00:10:12.888 [2024-07-26 00:52:43.215506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.888 
[2024-07-26 00:52:43.215538] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.888 [2024-07-26 00:52:43.228734] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.888 [2024-07-26 00:52:43.228765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.888 [2024-07-26 00:52:43.238538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.888 [2024-07-26 00:52:43.238570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.888 [2024-07-26 00:52:43.250623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.888 [2024-07-26 00:52:43.250656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.888 [2024-07-26 00:52:43.261787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.888 [2024-07-26 00:52:43.261818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.888 [2024-07-26 00:52:43.273013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.888 [2024-07-26 00:52:43.273041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.888 [2024-07-26 00:52:43.284292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.888 [2024-07-26 00:52:43.284320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.888 [2024-07-26 00:52:43.295198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.888 [2024-07-26 00:52:43.295227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.889 [2024-07-26 00:52:43.306241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.889 [2024-07-26 00:52:43.306269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:13.150 [2024-07-26 00:52:43.319226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.150 [2024-07-26 00:52:43.319255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.150 [2024-07-26 00:52:43.329440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.150 [2024-07-26 00:52:43.329471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.150 [2024-07-26 00:52:43.341408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.150 [2024-07-26 00:52:43.341439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.150 [2024-07-26 00:52:43.352808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.150 [2024-07-26 00:52:43.352841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.150 [2024-07-26 00:52:43.365951] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.150 [2024-07-26 00:52:43.365983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.150 [2024-07-26 00:52:43.376438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.150 [2024-07-26 00:52:43.376469] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.150 [2024-07-26 00:52:43.388501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.150 [2024-07-26 00:52:43.388532] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.150 [2024-07-26 00:52:43.400181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.150 [2024-07-26 00:52:43.400208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.150 [2024-07-26 00:52:43.411300] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.150 [2024-07-26 00:52:43.411327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.150 [2024-07-26 00:52:43.422198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.150 [2024-07-26 00:52:43.422225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.150 [2024-07-26 00:52:43.435133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.150 [2024-07-26 00:52:43.435161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.150 [2024-07-26 00:52:43.444988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.150 [2024-07-26 00:52:43.445019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.150 [2024-07-26 00:52:43.456726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.150 [2024-07-26 00:52:43.456753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.150 [2024-07-26 00:52:43.467726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.150 [2024-07-26 00:52:43.467757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.150 [2024-07-26 00:52:43.478941] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.150 [2024-07-26 00:52:43.478973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.150 [2024-07-26 00:52:43.492452] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.150 [2024-07-26 00:52:43.492483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.150 [2024-07-26 00:52:43.502959] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:13.150 [2024-07-26 00:52:43.502989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.150 [2024-07-26 00:52:43.513311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.150 [2024-07-26 00:52:43.513338] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.150 [2024-07-26 00:52:43.524703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.150 [2024-07-26 00:52:43.524733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.150 [2024-07-26 00:52:43.537670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.150 [2024-07-26 00:52:43.537701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.150 [2024-07-26 00:52:43.548136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.150 [2024-07-26 00:52:43.548164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.150 [2024-07-26 00:52:43.559209] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.150 [2024-07-26 00:52:43.559237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.150 [2024-07-26 00:52:43.570168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.150 [2024-07-26 00:52:43.570196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.413 [2024-07-26 00:52:43.581469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.413 [2024-07-26 00:52:43.581498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.413 [2024-07-26 00:52:43.592217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.413 
[2024-07-26 00:52:43.592244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.413 [2024-07-26 00:52:43.603298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.413 [2024-07-26 00:52:43.603325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.413 [2024-07-26 00:52:43.616110] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.413 [2024-07-26 00:52:43.616137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.413 [2024-07-26 00:52:43.627272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.413 [2024-07-26 00:52:43.627301] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.413 [2024-07-26 00:52:43.638214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.413 [2024-07-26 00:52:43.638241] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.413 [2024-07-26 00:52:43.651067] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.413 [2024-07-26 00:52:43.651095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.413 [2024-07-26 00:52:43.661626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.413 [2024-07-26 00:52:43.661657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.413 [2024-07-26 00:52:43.672673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.413 [2024-07-26 00:52:43.672709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.413 [2024-07-26 00:52:43.683977] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.413 [2024-07-26 00:52:43.684007] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.413 [2024-07-26 00:52:43.695545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.413 [2024-07-26 00:52:43.695576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.413 [2024-07-26 00:52:43.706614] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.413 [2024-07-26 00:52:43.706642] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.413 [2024-07-26 00:52:43.717347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.413 [2024-07-26 00:52:43.717375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.413 [2024-07-26 00:52:43.728710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.413 [2024-07-26 00:52:43.728737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.413 [2024-07-26 00:52:43.740249] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.413 [2024-07-26 00:52:43.740277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.413 [2024-07-26 00:52:43.753352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.413 [2024-07-26 00:52:43.753380] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.413 [2024-07-26 00:52:43.764014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.413 [2024-07-26 00:52:43.764042] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.413 [2024-07-26 00:52:43.775006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.413 [2024-07-26 00:52:43.775037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:13.413 [2024-07-26 00:52:43.788466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.413 [2024-07-26 00:52:43.788510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.413 [2024-07-26 00:52:43.799212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.413 [2024-07-26 00:52:43.799239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.413 [2024-07-26 00:52:43.810626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.413 [2024-07-26 00:52:43.810659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.413 [2024-07-26 00:52:43.821858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.413 [2024-07-26 00:52:43.821889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.413 [2024-07-26 00:52:43.833331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.413 [2024-07-26 00:52:43.833358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.670 [2024-07-26 00:52:43.844348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.670 [2024-07-26 00:52:43.844376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.670 [2024-07-26 00:52:43.855274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.670 [2024-07-26 00:52:43.855302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.670 [2024-07-26 00:52:43.866477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.670 [2024-07-26 00:52:43.866505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.670 [2024-07-26 00:52:43.877848] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.670 [2024-07-26 00:52:43.877879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.670 [2024-07-26 00:52:43.889087] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.670 [2024-07-26 00:52:43.889150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.670 [2024-07-26 00:52:43.900372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.670 [2024-07-26 00:52:43.900400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.670 [2024-07-26 00:52:43.913656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.670 [2024-07-26 00:52:43.913686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.670 [2024-07-26 00:52:43.924269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.670 [2024-07-26 00:52:43.924297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.670 [2024-07-26 00:52:43.935525] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.670 [2024-07-26 00:52:43.935556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.670 [2024-07-26 00:52:43.948369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.670 [2024-07-26 00:52:43.948397] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.670 [2024-07-26 00:52:43.958262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.670 [2024-07-26 00:52:43.958290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.670 [2024-07-26 00:52:43.969949] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:13.670 [2024-07-26 00:52:43.969978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.670 [2024-07-26 00:52:43.981370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.670 [2024-07-26 00:52:43.981398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.670 [2024-07-26 00:52:43.992806] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.670 [2024-07-26 00:52:43.992836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.670 [2024-07-26 00:52:44.004089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.670 [2024-07-26 00:52:44.004116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.670 [2024-07-26 00:52:44.016950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.670 [2024-07-26 00:52:44.016977] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.670 [2024-07-26 00:52:44.027160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.670 [2024-07-26 00:52:44.027188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.670 [2024-07-26 00:52:44.039272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.670 [2024-07-26 00:52:44.039300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.670 [2024-07-26 00:52:44.050698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.670 [2024-07-26 00:52:44.050730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.671 [2024-07-26 00:52:44.064032] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.671 
[2024-07-26 00:52:44.064071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.671 [2024-07-26 00:52:44.075197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.671 [2024-07-26 00:52:44.075226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.671 [2024-07-26 00:52:44.086208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.671 [2024-07-26 00:52:44.086238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.928 [2024-07-26 00:52:44.100417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.928 [2024-07-26 00:52:44.100448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.928 [2024-07-26 00:52:44.112038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.928 [2024-07-26 00:52:44.112085] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.928 [2024-07-26 00:52:44.123604] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.928 [2024-07-26 00:52:44.123634] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.928 [2024-07-26 00:52:44.135264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.928 [2024-07-26 00:52:44.135292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.928 [2024-07-26 00:52:44.145375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.928 [2024-07-26 00:52:44.145403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.928 [2024-07-26 00:52:44.155523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.928 [2024-07-26 00:52:44.155551] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.928 [2024-07-26 00:52:44.166126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.928 [2024-07-26 00:52:44.166154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.928 [2024-07-26 00:52:44.178346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.928 [2024-07-26 00:52:44.178373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.928 [2024-07-26 00:52:44.188464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.928 [2024-07-26 00:52:44.188491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.928 [2024-07-26 00:52:44.198873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.928 [2024-07-26 00:52:44.198901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.928 [2024-07-26 00:52:44.209184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.928 [2024-07-26 00:52:44.209211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.928 [2024-07-26 00:52:44.219367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.928 [2024-07-26 00:52:44.219394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.928 [2024-07-26 00:52:44.229382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.928 [2024-07-26 00:52:44.229410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.928 [2024-07-26 00:52:44.239921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.928 [2024-07-26 00:52:44.239948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:13.928 [2024-07-26 00:52:44.252550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.928 [2024-07-26 00:52:44.252578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.928 [2024-07-26 00:52:44.262798] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.928 [2024-07-26 00:52:44.262825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.928 [2024-07-26 00:52:44.273481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.928 [2024-07-26 00:52:44.273509] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.928 [2024-07-26 00:52:44.285538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.929 [2024-07-26 00:52:44.285566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.929 [2024-07-26 00:52:44.295652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.929 [2024-07-26 00:52:44.295679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.929 [2024-07-26 00:52:44.306037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.929 [2024-07-26 00:52:44.306073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.929 [2024-07-26 00:52:44.318428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.929 [2024-07-26 00:52:44.318462] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.929 [2024-07-26 00:52:44.328625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.929 [2024-07-26 00:52:44.328653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.929 [2024-07-26 00:52:44.339206] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.929 [2024-07-26 00:52:44.339234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.929 [2024-07-26 00:52:44.351721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.929 [2024-07-26 00:52:44.351749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.188 [2024-07-26 00:52:44.363636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.188 [2024-07-26 00:52:44.363668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.188 [2024-07-26 00:52:44.373373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.188 [2024-07-26 00:52:44.373405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.188 [2024-07-26 00:52:44.385764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.188 [2024-07-26 00:52:44.385806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.188 [2024-07-26 00:52:44.396963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.188 [2024-07-26 00:52:44.396990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.188 [2024-07-26 00:52:44.409033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.188 [2024-07-26 00:52:44.409072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.188 [2024-07-26 00:52:44.420253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.188 [2024-07-26 00:52:44.420282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.188 [2024-07-26 00:52:44.431641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use
00:10:14.188 [2024-07-26 00:52:44.431673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:14.188 [2024-07-26 00:52:44.442943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:14.188 [2024-07-26 00:52:44.442975] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... identical error pair repeated from 00:52:44.456 through 00:52:46.377, duplicates elided ...]
00:10:16.003 [2024-07-26 00:52:46.387868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:16.003 [2024-07-26 00:52:46.387896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:16.003 [2024-07-26 00:52:46.398843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.003 
[2024-07-26 00:52:46.398871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.003 [2024-07-26 00:52:46.411717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.003 [2024-07-26 00:52:46.411748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.003 [2024-07-26 00:52:46.422212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.003 [2024-07-26 00:52:46.422240] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.262 [2024-07-26 00:52:46.433197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.262 [2024-07-26 00:52:46.433230] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.262 [2024-07-26 00:52:46.444550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.262 [2024-07-26 00:52:46.444578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.262 [2024-07-26 00:52:46.455670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.262 [2024-07-26 00:52:46.455696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.262 [2024-07-26 00:52:46.463039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.262 [2024-07-26 00:52:46.463085] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.262 00:10:16.262 Latency(us) 00:10:16.262 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:16.262 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:16.262 Nvme1n1 : 5.01 11370.01 88.83 0.00 0.00 11242.06 4854.52 21068.61 00:10:16.262 =================================================================================================================== 
00:10:16.262 Total : 11370.01 88.83 0.00 0.00 11242.06 4854.52 21068.61 00:10:16.262 [2024-07-26 00:52:46.471084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.262 [2024-07-26 00:52:46.471128] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.522 [2024-07-26 00:52:46.695700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.522 [2024-07-26 00:52:46.695729] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.522 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1744277) - No such process 00:10:16.522 00:52:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1744277 00:10:16.522 00:52:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.522 00:52:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.522 00:52:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:16.522 00:52:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.522 00:52:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b
malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:16.522 00:52:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.522 00:52:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:16.522 delay0 00:10:16.522 00:52:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.522 00:52:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:16.522 00:52:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.522 00:52:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:16.522 00:52:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.522 00:52:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:16.522 EAL: No free 2048 kB hugepages reported on node 1 00:10:16.522 [2024-07-26 00:52:46.811970] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:23.137 Initializing NVMe Controllers 00:10:23.137 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:23.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:23.137 Initialization complete. Launching workers. 
00:10:23.137 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 77 00:10:23.137 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 364, failed to submit 33 00:10:23.137 success 188, unsuccess 176, failed 0 00:10:23.137 00:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:23.137 00:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:23.137 00:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:23.137 00:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:10:23.137 00:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:23.137 00:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:10:23.137 00:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:23.137 00:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:23.137 rmmod nvme_tcp 00:10:23.137 rmmod nvme_fabrics 00:10:23.137 rmmod nvme_keyring 00:10:23.137 00:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:23.137 00:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:10:23.137 00:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:10:23.137 00:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1742967 ']' 00:10:23.137 00:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1742967 00:10:23.137 00:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1742967 ']' 00:10:23.137 00:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1742967 00:10:23.137 00:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@955 -- # uname 00:10:23.137 00:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:23.137 00:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1742967 00:10:23.137 00:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:23.137 00:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:23.137 00:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1742967' 00:10:23.137 killing process with pid 1742967 00:10:23.137 00:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1742967 00:10:23.137 00:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1742967 00:10:23.137 00:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:23.137 00:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:23.137 00:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:23.137 00:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:23.137 00:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:23.137 00:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.137 00:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:23.137 00:52:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.044 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:25.044 00:10:25.044 real 
0m27.671s 00:10:25.044 user 0m40.941s 00:10:25.044 sys 0m8.156s 00:10:25.044 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:25.044 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:25.044 ************************************ 00:10:25.044 END TEST nvmf_zcopy 00:10:25.044 ************************************ 00:10:25.044 00:52:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:25.044 00:52:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:25.044 00:52:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:25.044 00:52:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:25.044 ************************************ 00:10:25.044 START TEST nvmf_nmic 00:10:25.044 ************************************ 00:10:25.044 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:25.044 * Looking for test storage... 
00:10:25.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:25.044 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:25.044 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:25.044 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:25.044 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:25.044 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:25.044 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:25.044 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:25.044 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:25.044 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:25.044 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:25.044 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:25.302 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:25.302 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:25.302 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:25.302 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:25.302 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:25.302 
00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:25.302 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:25.302 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:25.302 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:25.302 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:25.302 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:25.302 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.302 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.302 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.302 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:25.302 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.302 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:10:25.302 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:25.302 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:25.302 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:25.302 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:25.302 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:25.302 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:25.302 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:25.302 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:25.302 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:25.302 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:25.303 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:25.303 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:25.303 00:52:55 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:25.303 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:25.303 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:25.303 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:25.303 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.303 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:25.303 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.303 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:25.303 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:25.303 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:10:25.303 00:52:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:27.205 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:27.205 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:10:27.205 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:27.205 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:27.205 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:27.205 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:27.205 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:27.205 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@295 -- # net_devs=() 00:10:27.205 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:27.205 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:10:27.205 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:10:27.205 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:10:27.205 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:10:27.205 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:10:27.205 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:10:27.205 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:27.205 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:27.205 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:27.205 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:27.206 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:27.206 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:27.206 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:27.206 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:27.206 00:52:57 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:27.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:10:27.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:10:27.206 00:10:27.206 --- 10.0.0.2 ping statistics --- 00:10:27.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.206 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:27.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:27.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:10:27.206 00:10:27.206 --- 10.0.0.1 ping statistics --- 00:10:27.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.206 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1747575 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1747575 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1747575 ']' 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:27.206 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:27.206 [2024-07-26 00:52:57.591158] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:10:27.206 [2024-07-26 00:52:57.591242] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:27.206 EAL: No free 2048 kB hugepages reported on node 1 00:10:27.465 [2024-07-26 00:52:57.658677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:27.465 [2024-07-26 00:52:57.751121] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:27.465 [2024-07-26 00:52:57.751186] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:27.465 [2024-07-26 00:52:57.751202] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:27.465 [2024-07-26 00:52:57.751216] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:27.465 [2024-07-26 00:52:57.751227] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:27.465 [2024-07-26 00:52:57.751284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.465 [2024-07-26 00:52:57.751356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:27.465 [2024-07-26 00:52:57.751383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:27.465 [2024-07-26 00:52:57.751385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.465 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:27.465 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:27.465 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:27.465 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:27.465 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:27.723 [2024-07-26 00:52:57.907530] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:10:27.723 Malloc0 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:27.723 [2024-07-26 00:52:57.959804] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:27.723 test case1: single bdev can't be used in multiple subsystems 
00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:27.723 [2024-07-26 00:52:57.983651] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:27.723 [2024-07-26 00:52:57.983679] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:27.723 [2024-07-26 00:52:57.983694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.723 request: 00:10:27.723 { 00:10:27.723 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:27.723 "namespace": { 00:10:27.723 
"bdev_name": "Malloc0", 00:10:27.723 "no_auto_visible": false 00:10:27.723 }, 00:10:27.723 "method": "nvmf_subsystem_add_ns", 00:10:27.723 "req_id": 1 00:10:27.723 } 00:10:27.723 Got JSON-RPC error response 00:10:27.723 response: 00:10:27.723 { 00:10:27.723 "code": -32602, 00:10:27.723 "message": "Invalid parameters" 00:10:27.723 } 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:27.723 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:27.724 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:27.724 Adding namespace failed - expected result. 00:10:27.724 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:27.724 test case2: host connect to nvmf target in multiple paths 00:10:27.724 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:27.724 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.724 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:27.724 [2024-07-26 00:52:57.991758] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:27.724 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.724 00:52:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:28.290 00:52:58 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:29.225 00:52:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:29.225 00:52:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:29.225 00:52:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:29.225 00:52:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:29.225 00:52:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:31.133 00:53:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:31.133 00:53:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:31.133 00:53:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:31.133 00:53:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:31.133 00:53:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:31.133 00:53:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:31.133 00:53:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:31.133 [global] 00:10:31.133 thread=1 00:10:31.133 invalidate=1 00:10:31.133 rw=write 00:10:31.133 time_based=1 00:10:31.133 runtime=1 00:10:31.133 ioengine=libaio 00:10:31.133 direct=1 00:10:31.133 bs=4096 00:10:31.133 iodepth=1 00:10:31.133 
norandommap=0 00:10:31.133 numjobs=1 00:10:31.133 00:10:31.133 verify_dump=1 00:10:31.133 verify_backlog=512 00:10:31.133 verify_state_save=0 00:10:31.133 do_verify=1 00:10:31.133 verify=crc32c-intel 00:10:31.133 [job0] 00:10:31.133 filename=/dev/nvme0n1 00:10:31.133 Could not set queue depth (nvme0n1) 00:10:31.133 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.133 fio-3.35 00:10:31.133 Starting 1 thread 00:10:32.511 00:10:32.511 job0: (groupid=0, jobs=1): err= 0: pid=1748214: Fri Jul 26 00:53:02 2024 00:10:32.511 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:32.511 slat (nsec): min=6745, max=68732, avg=11517.53, stdev=5488.73 00:10:32.511 clat (usec): min=248, max=41081, avg=346.18, stdev=1041.94 00:10:32.511 lat (usec): min=256, max=41088, avg=357.70, stdev=1042.00 00:10:32.511 clat percentiles (usec): 00:10:32.511 | 1.00th=[ 260], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 281], 00:10:32.511 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 310], 60.00th=[ 318], 00:10:32.511 | 70.00th=[ 334], 80.00th=[ 351], 90.00th=[ 371], 95.00th=[ 392], 00:10:32.511 | 99.00th=[ 486], 99.50th=[ 519], 99.90th=[ 1958], 99.95th=[41157], 00:10:32.511 | 99.99th=[41157] 00:10:32.511 write: IOPS=1873, BW=7493KiB/s (7672kB/s)(7500KiB/1001msec); 0 zone resets 00:10:32.511 slat (usec): min=8, max=29254, avg=29.93, stdev=675.32 00:10:32.511 clat (usec): min=154, max=955, avg=203.47, stdev=35.60 00:10:32.511 lat (usec): min=164, max=29623, avg=233.39, stdev=680.19 00:10:32.511 clat percentiles (usec): 00:10:32.511 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 176], 00:10:32.511 | 30.00th=[ 190], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 208], 00:10:32.511 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 235], 95.00th=[ 249], 00:10:32.511 | 99.00th=[ 302], 99.50th=[ 338], 99.90th=[ 783], 99.95th=[ 955], 00:10:32.511 | 99.99th=[ 955] 00:10:32.511 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, 
stdev= 0.00, samples=1 00:10:32.511 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:32.511 lat (usec) : 250=52.51%, 500=47.05%, 750=0.29%, 1000=0.09% 00:10:32.511 lat (msec) : 2=0.03%, 50=0.03% 00:10:32.511 cpu : usr=3.30%, sys=6.10%, ctx=3414, majf=0, minf=2 00:10:32.511 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:32.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.511 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.511 issued rwts: total=1536,1875,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.511 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:32.511 00:10:32.511 Run status group 0 (all jobs): 00:10:32.511 READ: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB (6291kB), run=1001-1001msec 00:10:32.511 WRITE: bw=7493KiB/s (7672kB/s), 7493KiB/s-7493KiB/s (7672kB/s-7672kB/s), io=7500KiB (7680kB), run=1001-1001msec 00:10:32.511 00:10:32.511 Disk stats (read/write): 00:10:32.511 nvme0n1: ios=1496/1536, merge=0/0, ticks=781/298, in_queue=1079, util=98.70% 00:10:32.511 00:53:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:32.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:32.511 00:53:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:32.511 00:53:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:32.511 00:53:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:32.511 00:53:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:32.511 00:53:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:32.511 00:53:02 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:32.511 00:53:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:32.511 00:53:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:32.511 00:53:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:32.511 00:53:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:32.511 00:53:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:32.511 00:53:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:32.511 00:53:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:32.511 00:53:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:32.511 00:53:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:32.511 rmmod nvme_tcp 00:10:32.511 rmmod nvme_fabrics 00:10:32.511 rmmod nvme_keyring 00:10:32.511 00:53:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:32.511 00:53:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:32.511 00:53:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:32.511 00:53:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1747575 ']' 00:10:32.511 00:53:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1747575 00:10:32.511 00:53:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1747575 ']' 00:10:32.511 00:53:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1747575 00:10:32.511 00:53:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:32.511 00:53:02 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:32.512 00:53:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1747575 00:10:32.512 00:53:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:32.512 00:53:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:32.512 00:53:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1747575' 00:10:32.512 killing process with pid 1747575 00:10:32.512 00:53:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1747575 00:10:32.512 00:53:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1747575 00:10:32.770 00:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:32.770 00:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:32.770 00:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:32.770 00:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:32.770 00:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:32.770 00:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.770 00:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.770 00:53:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:35.304 00:10:35.304 real 0m9.819s 00:10:35.304 user 0m22.456s 00:10:35.304 sys 0m2.356s 00:10:35.304 00:53:05 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.304 ************************************ 00:10:35.304 END TEST nvmf_nmic 00:10:35.304 ************************************ 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:35.304 ************************************ 00:10:35.304 START TEST nvmf_fio_target 00:10:35.304 ************************************ 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:35.304 * Looking for test storage... 
00:10:35.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.304 00:53:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:35.304 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:35.305 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:35.305 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:35.305 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:35.305 00:53:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:35.305 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:35.305 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:35.305 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:35.305 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:35.305 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:35.305 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.305 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.305 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.305 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:35.305 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:35.305 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:10:35.305 00:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:37.205 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:37.205 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:37.205 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:37.205 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:37.205 00:53:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 
-- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:37.205 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:37.206 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:37.206 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:37.206 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:37.206 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:37.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:37.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:10:37.206 00:10:37.206 --- 10.0.0.2 ping statistics --- 00:10:37.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.206 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:10:37.206 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:37.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:37.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:10:37.206 00:10:37.206 --- 10.0.0.1 ping statistics --- 00:10:37.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.206 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:10:37.206 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.206 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:10:37.206 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:37.206 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.206 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:37.206 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:37.206 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.206 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:37.206 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:37.206 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:37.206 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:37.206 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:37.206 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.206 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1750292 00:10:37.206 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:37.206 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1750292 00:10:37.206 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1750292 ']' 00:10:37.206 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.206 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:37.206 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.206 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:37.206 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.466 [2024-07-26 00:53:07.651420] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:10:37.466 [2024-07-26 00:53:07.651498] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.466 EAL: No free 2048 kB hugepages reported on node 1 00:10:37.466 [2024-07-26 00:53:07.719343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:37.466 [2024-07-26 00:53:07.812849] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.466 [2024-07-26 00:53:07.812913] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:37.466 [2024-07-26 00:53:07.812930] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.466 [2024-07-26 00:53:07.812943] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.466 [2024-07-26 00:53:07.812962] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:37.466 [2024-07-26 00:53:07.813024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.466 [2024-07-26 00:53:07.813123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.466 [2024-07-26 00:53:07.813125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.466 [2024-07-26 00:53:07.813094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.724 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:37.724 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:37.724 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:37.724 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:37.724 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.724 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:37.724 00:53:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:37.982 [2024-07-26 00:53:08.234587] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:37.982 00:53:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:38.240 00:53:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:38.240 00:53:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:38.498 00:53:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:38.498 00:53:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:38.756 00:53:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:38.756 00:53:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:39.014 00:53:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:39.014 00:53:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:39.271 00:53:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:39.530 00:53:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:39.530 00:53:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:39.788 00:53:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:39.788 00:53:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:40.046 00:53:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:40.046 00:53:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:40.304 00:53:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:40.562 00:53:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:40.562 00:53:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:40.825 00:53:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:40.826 00:53:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:41.132 00:53:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:41.390 [2024-07-26 00:53:11.614405] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:41.390 00:53:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:41.648 00:53:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:41.906 00:53:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:42.476 00:53:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:42.476 00:53:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:42.476 00:53:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:42.476 00:53:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:42.476 00:53:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:42.476 00:53:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:44.377 00:53:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:44.377 00:53:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:44.377 00:53:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:44.377 00:53:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:44.377 00:53:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:44.377 00:53:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:44.377 00:53:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:44.377 [global] 00:10:44.377 thread=1 00:10:44.377 invalidate=1 00:10:44.377 rw=write 00:10:44.377 time_based=1 00:10:44.377 runtime=1 00:10:44.377 ioengine=libaio 00:10:44.377 direct=1 00:10:44.377 bs=4096 00:10:44.377 iodepth=1 00:10:44.377 norandommap=0 00:10:44.377 numjobs=1 00:10:44.377 00:10:44.377 verify_dump=1 00:10:44.377 verify_backlog=512 00:10:44.377 verify_state_save=0 00:10:44.377 do_verify=1 00:10:44.377 verify=crc32c-intel 00:10:44.377 [job0] 00:10:44.377 filename=/dev/nvme0n1 00:10:44.377 [job1] 00:10:44.377 filename=/dev/nvme0n2 00:10:44.377 [job2] 00:10:44.377 filename=/dev/nvme0n3 00:10:44.377 [job3] 00:10:44.377 filename=/dev/nvme0n4 00:10:44.377 Could not set queue depth (nvme0n1) 00:10:44.377 Could not set queue depth (nvme0n2) 00:10:44.377 Could not set queue depth (nvme0n3) 00:10:44.377 Could not set queue depth (nvme0n4) 00:10:44.635 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.635 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.635 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.635 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.635 fio-3.35 00:10:44.635 Starting 4 threads 00:10:46.012 00:10:46.012 job0: (groupid=0, jobs=1): err= 0: pid=1751361: Fri Jul 26 00:53:16 2024 00:10:46.012 read: IOPS=49, BW=200KiB/s (204kB/s)(204KiB/1022msec) 00:10:46.012 slat (nsec): min=9228, max=49341, avg=22155.18, stdev=8197.04 00:10:46.012 clat (usec): min=309, max=43999, avg=17188.01, stdev=20227.76 00:10:46.012 lat (usec): min=319, max=44018, avg=17210.16, stdev=20231.19 00:10:46.012 clat percentiles (usec): 00:10:46.012 | 1.00th=[ 310], 5.00th=[ 347], 10.00th=[ 359], 20.00th=[ 400], 
00:10:46.012 | 30.00th=[ 449], 40.00th=[ 486], 50.00th=[ 506], 60.00th=[40633], 00:10:46.012 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:46.012 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:10:46.012 | 99.99th=[43779] 00:10:46.012 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:10:46.012 slat (nsec): min=7060, max=73561, avg=17728.10, stdev=9856.84 00:10:46.012 clat (usec): min=165, max=455, avg=258.28, stdev=47.71 00:10:46.012 lat (usec): min=176, max=503, avg=276.01, stdev=50.17 00:10:46.012 clat percentiles (usec): 00:10:46.012 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 219], 20.00th=[ 229], 00:10:46.012 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 253], 00:10:46.012 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 322], 95.00th=[ 371], 00:10:46.012 | 99.00th=[ 441], 99.50th=[ 445], 99.90th=[ 457], 99.95th=[ 457], 00:10:46.012 | 99.99th=[ 457] 00:10:46.012 bw ( KiB/s): min= 4096, max= 4096, per=29.57%, avg=4096.00, stdev= 0.00, samples=1 00:10:46.012 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:46.012 lat (usec) : 250=50.62%, 500=44.76%, 750=0.89% 00:10:46.012 lat (msec) : 50=3.73% 00:10:46.012 cpu : usr=1.08%, sys=0.78%, ctx=563, majf=0, minf=1 00:10:46.012 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.012 issued rwts: total=51,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.012 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.012 job1: (groupid=0, jobs=1): err= 0: pid=1751369: Fri Jul 26 00:53:16 2024 00:10:46.012 read: IOPS=1485, BW=5944KiB/s (6087kB/s)(6152KiB/1035msec) 00:10:46.012 slat (nsec): min=6131, max=61896, avg=16141.91, stdev=3994.40 00:10:46.012 clat (usec): min=251, max=40994, avg=356.98, stdev=1464.24 
00:10:46.012 lat (usec): min=259, max=41009, avg=373.12, stdev=1464.04 00:10:46.012 clat percentiles (usec): 00:10:46.012 | 1.00th=[ 273], 5.00th=[ 285], 10.00th=[ 289], 20.00th=[ 293], 00:10:46.012 | 30.00th=[ 297], 40.00th=[ 302], 50.00th=[ 302], 60.00th=[ 306], 00:10:46.012 | 70.00th=[ 310], 80.00th=[ 314], 90.00th=[ 322], 95.00th=[ 330], 00:10:46.012 | 99.00th=[ 416], 99.50th=[ 437], 99.90th=[40633], 99.95th=[41157], 00:10:46.012 | 99.99th=[41157] 00:10:46.012 write: IOPS=1978, BW=7915KiB/s (8105kB/s)(8192KiB/1035msec); 0 zone resets 00:10:46.013 slat (nsec): min=6828, max=66191, avg=17695.72, stdev=7565.00 00:10:46.013 clat (usec): min=133, max=464, avg=198.46, stdev=37.16 00:10:46.013 lat (usec): min=142, max=491, avg=216.15, stdev=40.09 00:10:46.013 clat percentiles (usec): 00:10:46.013 | 1.00th=[ 141], 5.00th=[ 153], 10.00th=[ 161], 20.00th=[ 176], 00:10:46.013 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 198], 00:10:46.013 | 70.00th=[ 204], 80.00th=[ 217], 90.00th=[ 237], 95.00th=[ 258], 00:10:46.013 | 99.00th=[ 367], 99.50th=[ 404], 99.90th=[ 445], 99.95th=[ 465], 00:10:46.013 | 99.99th=[ 465] 00:10:46.013 bw ( KiB/s): min= 8192, max= 8192, per=59.14%, avg=8192.00, stdev= 0.00, samples=2 00:10:46.013 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:10:46.013 lat (usec) : 250=53.79%, 500=46.15% 00:10:46.013 lat (msec) : 50=0.06% 00:10:46.013 cpu : usr=4.35%, sys=8.22%, ctx=3587, majf=0, minf=1 00:10:46.013 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.013 issued rwts: total=1538,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.013 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.013 job2: (groupid=0, jobs=1): err= 0: pid=1751372: Fri Jul 26 00:53:16 2024 00:10:46.013 read: IOPS=20, BW=81.4KiB/s 
(83.3kB/s)(84.0KiB/1032msec) 00:10:46.013 slat (nsec): min=14236, max=48421, avg=30184.76, stdev=8517.91 00:10:46.013 clat (usec): min=40904, max=42031, avg=41706.06, stdev=444.61 00:10:46.013 lat (usec): min=40939, max=42046, avg=41736.25, stdev=441.46 00:10:46.013 clat percentiles (usec): 00:10:46.013 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:46.013 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:10:46.013 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:46.013 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:46.013 | 99.99th=[42206] 00:10:46.013 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:10:46.013 slat (nsec): min=5647, max=66934, avg=18705.75, stdev=11787.68 00:10:46.013 clat (usec): min=174, max=611, avg=279.43, stdev=82.35 00:10:46.013 lat (usec): min=186, max=667, avg=298.13, stdev=88.18 00:10:46.013 clat percentiles (usec): 00:10:46.013 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 202], 20.00th=[ 219], 00:10:46.013 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 255], 60.00th=[ 269], 00:10:46.013 | 70.00th=[ 285], 80.00th=[ 318], 90.00th=[ 412], 95.00th=[ 461], 00:10:46.013 | 99.00th=[ 553], 99.50th=[ 586], 99.90th=[ 611], 99.95th=[ 611], 00:10:46.013 | 99.99th=[ 611] 00:10:46.013 bw ( KiB/s): min= 4096, max= 4096, per=29.57%, avg=4096.00, stdev= 0.00, samples=1 00:10:46.013 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:46.013 lat (usec) : 250=44.65%, 500=49.53%, 750=1.88% 00:10:46.013 lat (msec) : 50=3.94% 00:10:46.013 cpu : usr=0.39%, sys=1.26%, ctx=533, majf=0, minf=2 00:10:46.013 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.013 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:10:46.013 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.013 job3: (groupid=0, jobs=1): err= 0: pid=1751373: Fri Jul 26 00:53:16 2024 00:10:46.013 read: IOPS=20, BW=82.6KiB/s (84.6kB/s)(84.0KiB/1017msec) 00:10:46.013 slat (nsec): min=6953, max=49728, avg=28946.81, stdev=9906.16 00:10:46.013 clat (usec): min=40871, max=42060, avg=41669.87, stdev=475.09 00:10:46.013 lat (usec): min=40905, max=42076, avg=41698.82, stdev=475.56 00:10:46.013 clat percentiles (usec): 00:10:46.013 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:46.013 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:10:46.013 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:46.013 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:46.013 | 99.99th=[42206] 00:10:46.013 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:10:46.013 slat (usec): min=6, max=17037, avg=47.61, stdev=752.38 00:10:46.013 clat (usec): min=163, max=514, avg=223.92, stdev=33.66 00:10:46.013 lat (usec): min=174, max=17261, avg=271.53, stdev=753.09 00:10:46.013 clat percentiles (usec): 00:10:46.013 | 1.00th=[ 169], 5.00th=[ 182], 10.00th=[ 190], 20.00th=[ 200], 00:10:46.013 | 30.00th=[ 206], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 229], 00:10:46.013 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 255], 95.00th=[ 273], 00:10:46.013 | 99.00th=[ 379], 99.50th=[ 379], 99.90th=[ 515], 99.95th=[ 515], 00:10:46.013 | 99.99th=[ 515] 00:10:46.013 bw ( KiB/s): min= 4096, max= 4096, per=29.57%, avg=4096.00, stdev= 0.00, samples=1 00:10:46.013 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:46.013 lat (usec) : 250=83.30%, 500=12.57%, 750=0.19% 00:10:46.013 lat (msec) : 50=3.94% 00:10:46.013 cpu : usr=0.49%, sys=0.59%, ctx=535, majf=0, minf=1 00:10:46.013 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.013 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.013 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.013 00:10:46.013 Run status group 0 (all jobs): 00:10:46.013 READ: bw=6303KiB/s (6455kB/s), 81.4KiB/s-5944KiB/s (83.3kB/s-6087kB/s), io=6524KiB (6681kB), run=1017-1035msec 00:10:46.013 WRITE: bw=13.5MiB/s (14.2MB/s), 1984KiB/s-7915KiB/s (2032kB/s-8105kB/s), io=14.0MiB (14.7MB), run=1017-1035msec 00:10:46.013 00:10:46.013 Disk stats (read/write): 00:10:46.013 nvme0n1: ios=96/512, merge=0/0, ticks=707/129, in_queue=836, util=86.77% 00:10:46.013 nvme0n2: ios=1503/1536, merge=0/0, ticks=1397/303, in_queue=1700, util=97.56% 00:10:46.013 nvme0n3: ios=16/512, merge=0/0, ticks=667/133, in_queue=800, util=88.76% 00:10:46.013 nvme0n4: ios=51/512, merge=0/0, ticks=1547/112, in_queue=1659, util=97.77% 00:10:46.013 00:53:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:46.013 [global] 00:10:46.013 thread=1 00:10:46.013 invalidate=1 00:10:46.013 rw=randwrite 00:10:46.013 time_based=1 00:10:46.013 runtime=1 00:10:46.013 ioengine=libaio 00:10:46.013 direct=1 00:10:46.013 bs=4096 00:10:46.013 iodepth=1 00:10:46.013 norandommap=0 00:10:46.013 numjobs=1 00:10:46.013 00:10:46.013 verify_dump=1 00:10:46.013 verify_backlog=512 00:10:46.013 verify_state_save=0 00:10:46.013 do_verify=1 00:10:46.013 verify=crc32c-intel 00:10:46.013 [job0] 00:10:46.013 filename=/dev/nvme0n1 00:10:46.013 [job1] 00:10:46.013 filename=/dev/nvme0n2 00:10:46.013 [job2] 00:10:46.013 filename=/dev/nvme0n3 00:10:46.013 [job3] 00:10:46.013 filename=/dev/nvme0n4 00:10:46.013 Could not set queue depth (nvme0n1) 00:10:46.013 Could not set queue depth (nvme0n2) 00:10:46.013 Could not set queue depth (nvme0n3) 
00:10:46.013 Could not set queue depth (nvme0n4) 00:10:46.013 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.013 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.013 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.013 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.013 fio-3.35 00:10:46.013 Starting 4 threads 00:10:47.391 00:10:47.391 job0: (groupid=0, jobs=1): err= 0: pid=1751605: Fri Jul 26 00:53:17 2024 00:10:47.391 read: IOPS=572, BW=2290KiB/s (2345kB/s)(2292KiB/1001msec) 00:10:47.391 slat (nsec): min=11682, max=47306, avg=17098.16, stdev=3079.91 00:10:47.391 clat (usec): min=253, max=41034, avg=1273.15, stdev=6284.54 00:10:47.391 lat (usec): min=269, max=41049, avg=1290.24, stdev=6284.72 00:10:47.391 clat percentiles (usec): 00:10:47.391 | 1.00th=[ 255], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 269], 00:10:47.391 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 277], 00:10:47.391 | 70.00th=[ 281], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 359], 00:10:47.391 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:47.391 | 99.99th=[41157] 00:10:47.391 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:47.391 slat (nsec): min=9148, max=46056, avg=20139.26, stdev=5362.50 00:10:47.391 clat (usec): min=170, max=382, avg=226.15, stdev=27.36 00:10:47.391 lat (usec): min=183, max=398, avg=246.29, stdev=27.63 00:10:47.391 clat percentiles (usec): 00:10:47.391 | 1.00th=[ 192], 5.00th=[ 196], 10.00th=[ 198], 20.00th=[ 202], 00:10:47.391 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 219], 60.00th=[ 233], 00:10:47.391 | 70.00th=[ 239], 80.00th=[ 249], 90.00th=[ 262], 95.00th=[ 277], 00:10:47.391 | 99.00th=[ 302], 99.50th=[ 326], 99.90th=[ 338], 
99.95th=[ 383], 00:10:47.391 | 99.99th=[ 383] 00:10:47.391 bw ( KiB/s): min= 8192, max= 8192, per=83.12%, avg=8192.00, stdev= 0.00, samples=1 00:10:47.391 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:47.391 lat (usec) : 250=51.97%, 500=47.15% 00:10:47.391 lat (msec) : 50=0.88% 00:10:47.391 cpu : usr=2.20%, sys=4.10%, ctx=1597, majf=0, minf=1 00:10:47.391 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:47.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.391 issued rwts: total=573,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.391 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:47.391 job1: (groupid=0, jobs=1): err= 0: pid=1751606: Fri Jul 26 00:53:17 2024 00:10:47.391 read: IOPS=22, BW=88.5KiB/s (90.7kB/s)(92.0KiB/1039msec) 00:10:47.391 slat (nsec): min=12053, max=41683, avg=23753.22, stdev=10052.87 00:10:47.391 clat (usec): min=384, max=42049, avg=39852.81, stdev=8617.84 00:10:47.391 lat (usec): min=397, max=42065, avg=39876.56, stdev=8620.26 00:10:47.391 clat percentiles (usec): 00:10:47.391 | 1.00th=[ 383], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:47.391 | 30.00th=[41157], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:47.391 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:47.391 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:47.391 | 99.99th=[42206] 00:10:47.391 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:10:47.391 slat (nsec): min=9701, max=41819, avg=15576.83, stdev=4503.94 00:10:47.391 clat (usec): min=178, max=2201, avg=216.98, stdev=89.27 00:10:47.391 lat (usec): min=190, max=2215, avg=232.55, stdev=89.38 00:10:47.391 clat percentiles (usec): 00:10:47.391 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 200], 00:10:47.391 | 30.00th=[ 204], 
40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 215], 00:10:47.391 | 70.00th=[ 219], 80.00th=[ 225], 90.00th=[ 235], 95.00th=[ 245], 00:10:47.391 | 99.00th=[ 262], 99.50th=[ 269], 99.90th=[ 2212], 99.95th=[ 2212], 00:10:47.391 | 99.99th=[ 2212] 00:10:47.391 bw ( KiB/s): min= 4096, max= 4096, per=41.56%, avg=4096.00, stdev= 0.00, samples=1 00:10:47.391 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:47.391 lat (usec) : 250=92.34%, 500=3.36% 00:10:47.391 lat (msec) : 4=0.19%, 50=4.11% 00:10:47.391 cpu : usr=0.58%, sys=0.58%, ctx=536, majf=0, minf=2 00:10:47.391 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:47.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.392 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.392 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:47.392 job2: (groupid=0, jobs=1): err= 0: pid=1751607: Fri Jul 26 00:53:17 2024 00:10:47.392 read: IOPS=20, BW=82.9KiB/s (84.9kB/s)(84.0KiB/1013msec) 00:10:47.392 slat (nsec): min=13191, max=44425, avg=25310.10, stdev=10761.34 00:10:47.392 clat (usec): min=40740, max=45012, avg=41529.61, stdev=1422.19 00:10:47.392 lat (usec): min=40759, max=45033, avg=41554.92, stdev=1423.01 00:10:47.392 clat percentiles (usec): 00:10:47.392 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:47.392 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:47.392 | 70.00th=[41157], 80.00th=[41157], 90.00th=[44827], 95.00th=[44827], 00:10:47.392 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:10:47.392 | 99.99th=[44827] 00:10:47.392 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:10:47.392 slat (nsec): min=9767, max=42899, avg=19609.91, stdev=5187.90 00:10:47.392 clat (usec): min=189, max=332, avg=248.14, stdev=20.84 00:10:47.392 
lat (usec): min=216, max=350, avg=267.75, stdev=21.09 00:10:47.392 clat percentiles (usec): 00:10:47.392 | 1.00th=[ 206], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 231], 00:10:47.392 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 253], 00:10:47.392 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 281], 00:10:47.392 | 99.00th=[ 306], 99.50th=[ 314], 99.90th=[ 334], 99.95th=[ 334], 00:10:47.392 | 99.99th=[ 334] 00:10:47.392 bw ( KiB/s): min= 4096, max= 4096, per=41.56%, avg=4096.00, stdev= 0.00, samples=1 00:10:47.392 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:47.392 lat (usec) : 250=52.53%, 500=43.53% 00:10:47.392 lat (msec) : 50=3.94% 00:10:47.392 cpu : usr=0.79%, sys=1.19%, ctx=534, majf=0, minf=1 00:10:47.392 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:47.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.392 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.392 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:47.392 job3: (groupid=0, jobs=1): err= 0: pid=1751608: Fri Jul 26 00:53:17 2024 00:10:47.392 read: IOPS=207, BW=830KiB/s (850kB/s)(852KiB/1026msec) 00:10:47.392 slat (nsec): min=7396, max=34481, avg=11262.19, stdev=5676.04 00:10:47.392 clat (usec): min=235, max=42022, avg=4106.81, stdev=11966.39 00:10:47.392 lat (usec): min=243, max=42056, avg=4118.08, stdev=11970.21 00:10:47.392 clat percentiles (usec): 00:10:47.392 | 1.00th=[ 241], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 253], 00:10:47.392 | 30.00th=[ 258], 40.00th=[ 260], 50.00th=[ 262], 60.00th=[ 265], 00:10:47.392 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 338], 95.00th=[41157], 00:10:47.392 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:47.392 | 99.99th=[42206] 00:10:47.392 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 
0 zone resets 00:10:47.392 slat (nsec): min=8751, max=53262, avg=19349.19, stdev=8571.55 00:10:47.392 clat (usec): min=161, max=1032, avg=263.91, stdev=70.83 00:10:47.392 lat (usec): min=180, max=1048, avg=283.26, stdev=71.91 00:10:47.392 clat percentiles (usec): 00:10:47.392 | 1.00th=[ 169], 5.00th=[ 194], 10.00th=[ 208], 20.00th=[ 225], 00:10:47.392 | 30.00th=[ 233], 40.00th=[ 243], 50.00th=[ 251], 60.00th=[ 260], 00:10:47.392 | 70.00th=[ 269], 80.00th=[ 297], 90.00th=[ 326], 95.00th=[ 379], 00:10:47.392 | 99.00th=[ 461], 99.50th=[ 685], 99.90th=[ 1029], 99.95th=[ 1029], 00:10:47.392 | 99.99th=[ 1029] 00:10:47.392 bw ( KiB/s): min= 4096, max= 4096, per=41.56%, avg=4096.00, stdev= 0.00, samples=1 00:10:47.392 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:47.392 lat (usec) : 250=35.86%, 500=60.83%, 750=0.28%, 1000=0.14% 00:10:47.392 lat (msec) : 2=0.14%, 50=2.76% 00:10:47.392 cpu : usr=1.07%, sys=0.88%, ctx=726, majf=0, minf=1 00:10:47.392 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:47.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.392 issued rwts: total=213,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.392 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:47.392 00:10:47.392 Run status group 0 (all jobs): 00:10:47.392 READ: bw=3195KiB/s (3272kB/s), 82.9KiB/s-2290KiB/s (84.9kB/s-2345kB/s), io=3320KiB (3400kB), run=1001-1039msec 00:10:47.392 WRITE: bw=9856KiB/s (10.1MB/s), 1971KiB/s-4092KiB/s (2018kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1039msec 00:10:47.392 00:10:47.392 Disk stats (read/write): 00:10:47.392 nvme0n1: ios=613/1024, merge=0/0, ticks=627/224, in_queue=851, util=88.08% 00:10:47.392 nvme0n2: ios=41/512, merge=0/0, ticks=1653/105, in_queue=1758, util=95.12% 00:10:47.392 nvme0n3: ios=75/512, merge=0/0, ticks=1746/125, in_queue=1871, util=98.12% 
00:10:47.392 nvme0n4: ios=256/512, merge=0/0, ticks=1133/136, in_queue=1269, util=98.63% 00:10:47.392 00:53:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:47.392 [global] 00:10:47.392 thread=1 00:10:47.392 invalidate=1 00:10:47.392 rw=write 00:10:47.392 time_based=1 00:10:47.392 runtime=1 00:10:47.392 ioengine=libaio 00:10:47.392 direct=1 00:10:47.392 bs=4096 00:10:47.392 iodepth=128 00:10:47.392 norandommap=0 00:10:47.392 numjobs=1 00:10:47.392 00:10:47.392 verify_dump=1 00:10:47.392 verify_backlog=512 00:10:47.392 verify_state_save=0 00:10:47.392 do_verify=1 00:10:47.392 verify=crc32c-intel 00:10:47.392 [job0] 00:10:47.392 filename=/dev/nvme0n1 00:10:47.392 [job1] 00:10:47.392 filename=/dev/nvme0n2 00:10:47.392 [job2] 00:10:47.392 filename=/dev/nvme0n3 00:10:47.392 [job3] 00:10:47.392 filename=/dev/nvme0n4 00:10:47.392 Could not set queue depth (nvme0n1) 00:10:47.392 Could not set queue depth (nvme0n2) 00:10:47.392 Could not set queue depth (nvme0n3) 00:10:47.392 Could not set queue depth (nvme0n4) 00:10:47.651 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.651 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.651 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.651 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.651 fio-3.35 00:10:47.651 Starting 4 threads 00:10:49.070 00:10:49.070 job0: (groupid=0, jobs=1): err= 0: pid=1751834: Fri Jul 26 00:53:19 2024 00:10:49.070 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:10:49.070 slat (usec): min=2, max=9630, avg=116.56, stdev=656.72 00:10:49.070 clat (usec): min=3442, max=47449, avg=15469.13, stdev=6993.81 
00:10:49.070 lat (usec): min=4100, max=47459, avg=15585.69, stdev=7053.48 00:10:49.070 clat percentiles (usec): 00:10:49.070 | 1.00th=[ 9241], 5.00th=[10290], 10.00th=[10945], 20.00th=[11076], 00:10:49.070 | 30.00th=[11338], 40.00th=[11600], 50.00th=[12125], 60.00th=[12911], 00:10:49.070 | 70.00th=[15533], 80.00th=[20317], 90.00th=[25297], 95.00th=[32113], 00:10:49.070 | 99.00th=[40633], 99.50th=[43779], 99.90th=[47449], 99.95th=[47449], 00:10:49.070 | 99.99th=[47449] 00:10:49.070 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:10:49.070 slat (usec): min=4, max=13767, avg=103.49, stdev=491.15 00:10:49.070 clat (usec): min=4134, max=45626, avg=13805.10, stdev=4847.56 00:10:49.070 lat (usec): min=4143, max=45636, avg=13908.60, stdev=4880.75 00:10:49.070 clat percentiles (usec): 00:10:49.070 | 1.00th=[ 8029], 5.00th=[10421], 10.00th=[11207], 20.00th=[11469], 00:10:49.070 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12518], 00:10:49.070 | 70.00th=[12780], 80.00th=[14615], 90.00th=[20055], 95.00th=[22938], 00:10:49.070 | 99.00th=[36963], 99.50th=[41157], 99.90th=[41157], 99.95th=[45876], 00:10:49.070 | 99.99th=[45876] 00:10:49.070 bw ( KiB/s): min=13482, max=22424, per=27.58%, avg=17953.00, stdev=6322.95, samples=2 00:10:49.070 iops : min= 3370, max= 5606, avg=4488.00, stdev=1581.09, samples=2 00:10:49.070 lat (msec) : 4=0.01%, 10=3.78%, 20=80.32%, 50=15.89% 00:10:49.070 cpu : usr=5.88%, sys=9.27%, ctx=559, majf=0, minf=1 00:10:49.070 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:49.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.070 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:49.070 issued rwts: total=4100,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.070 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:49.070 job1: (groupid=0, jobs=1): err= 0: pid=1751835: Fri Jul 26 00:53:19 2024 00:10:49.070 read: 
IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec) 00:10:49.070 slat (usec): min=2, max=14557, avg=105.80, stdev=735.13 00:10:49.070 clat (usec): min=4961, max=72274, avg=14343.90, stdev=7930.84 00:10:49.070 lat (usec): min=4970, max=72291, avg=14449.70, stdev=8015.31 00:10:49.070 clat percentiles (usec): 00:10:49.070 | 1.00th=[ 5669], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10159], 00:10:49.070 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11863], 60.00th=[12911], 00:10:49.070 | 70.00th=[14222], 80.00th=[16450], 90.00th=[19006], 95.00th=[28967], 00:10:49.070 | 99.00th=[52691], 99.50th=[59507], 99.90th=[71828], 99.95th=[71828], 00:10:49.070 | 99.99th=[71828] 00:10:49.070 write: IOPS=3436, BW=13.4MiB/s (14.1MB/s)(13.6MiB/1012msec); 0 zone resets 00:10:49.070 slat (usec): min=4, max=13101, avg=158.72, stdev=760.94 00:10:49.070 clat (usec): min=2865, max=72205, avg=24245.85, stdev=14527.66 00:10:49.070 lat (usec): min=2872, max=72214, avg=24404.57, stdev=14624.44 00:10:49.070 clat percentiles (usec): 00:10:49.070 | 1.00th=[ 5800], 5.00th=[ 8291], 10.00th=[10028], 20.00th=[10683], 00:10:49.070 | 30.00th=[12125], 40.00th=[17957], 50.00th=[21890], 60.00th=[23462], 00:10:49.070 | 70.00th=[28181], 80.00th=[36439], 90.00th=[46400], 95.00th=[56361], 00:10:49.070 | 99.00th=[63701], 99.50th=[63701], 99.90th=[68682], 99.95th=[71828], 00:10:49.070 | 99.99th=[71828] 00:10:49.070 bw ( KiB/s): min= 9984, max=16816, per=20.59%, avg=13400.00, stdev=4830.95, samples=2 00:10:49.070 iops : min= 2496, max= 4204, avg=3350.00, stdev=1207.74, samples=2 00:10:49.070 lat (msec) : 4=0.09%, 10=11.80%, 20=54.93%, 50=28.21%, 100=4.96% 00:10:49.070 cpu : usr=4.15%, sys=5.74%, ctx=359, majf=0, minf=1 00:10:49.070 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:49.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.070 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:49.070 issued rwts: 
total=3072,3478,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.070 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:49.070 job2: (groupid=0, jobs=1): err= 0: pid=1751836: Fri Jul 26 00:53:19 2024 00:10:49.070 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:10:49.070 slat (usec): min=2, max=15395, avg=107.27, stdev=708.23 00:10:49.070 clat (usec): min=5755, max=53131, avg=14070.96, stdev=5703.16 00:10:49.070 lat (usec): min=5761, max=53137, avg=14178.23, stdev=5752.02 00:10:49.070 clat percentiles (usec): 00:10:49.070 | 1.00th=[ 8848], 5.00th=[10159], 10.00th=[11076], 20.00th=[11863], 00:10:49.070 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12387], 60.00th=[12649], 00:10:49.070 | 70.00th=[13698], 80.00th=[14877], 90.00th=[17433], 95.00th=[21890], 00:10:49.070 | 99.00th=[43254], 99.50th=[52167], 99.90th=[53216], 99.95th=[53216], 00:10:49.070 | 99.99th=[53216] 00:10:49.070 write: IOPS=4770, BW=18.6MiB/s (19.5MB/s)(18.7MiB/1006msec); 0 zone resets 00:10:49.070 slat (usec): min=4, max=15846, avg=87.84, stdev=601.51 00:10:49.070 clat (usec): min=1580, max=57148, avg=13079.55, stdev=5007.52 00:10:49.070 lat (usec): min=1593, max=57158, avg=13167.38, stdev=5031.70 00:10:49.070 clat percentiles (usec): 00:10:49.070 | 1.00th=[ 6849], 5.00th=[ 8848], 10.00th=[10028], 20.00th=[11600], 00:10:49.070 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12387], 60.00th=[12649], 00:10:49.070 | 70.00th=[12911], 80.00th=[13435], 90.00th=[14091], 95.00th=[19268], 00:10:49.070 | 99.00th=[35914], 99.50th=[56886], 99.90th=[56886], 99.95th=[56886], 00:10:49.070 | 99.99th=[57410] 00:10:49.070 bw ( KiB/s): min=16400, max=20976, per=28.71%, avg=18688.00, stdev=3235.72, samples=2 00:10:49.070 iops : min= 4100, max= 5244, avg=4672.00, stdev=808.93, samples=2 00:10:49.070 lat (msec) : 2=0.02%, 10=7.06%, 20=88.09%, 50=4.12%, 100=0.70% 00:10:49.070 cpu : usr=5.97%, sys=9.45%, ctx=358, majf=0, minf=1 00:10:49.070 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, 
>=64=99.3% 00:10:49.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.070 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:49.070 issued rwts: total=4608,4799,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.070 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:49.070 job3: (groupid=0, jobs=1): err= 0: pid=1751837: Fri Jul 26 00:53:19 2024 00:10:49.070 read: IOPS=3364, BW=13.1MiB/s (13.8MB/s)(13.2MiB/1007msec) 00:10:49.070 slat (usec): min=3, max=11783, avg=135.55, stdev=873.15 00:10:49.070 clat (usec): min=3672, max=45439, avg=17713.05, stdev=6783.38 00:10:49.070 lat (usec): min=6809, max=45464, avg=17848.61, stdev=6860.81 00:10:49.070 clat percentiles (usec): 00:10:49.070 | 1.00th=[ 8586], 5.00th=[11863], 10.00th=[12387], 20.00th=[13042], 00:10:49.070 | 30.00th=[13829], 40.00th=[15008], 50.00th=[15795], 60.00th=[16188], 00:10:49.070 | 70.00th=[16712], 80.00th=[22414], 90.00th=[28967], 95.00th=[31327], 00:10:49.070 | 99.00th=[40109], 99.50th=[44827], 99.90th=[45351], 99.95th=[45351], 00:10:49.070 | 99.99th=[45351] 00:10:49.070 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:10:49.070 slat (usec): min=4, max=19886, avg=141.02, stdev=853.60 00:10:49.070 clat (usec): min=7019, max=59512, avg=18750.12, stdev=9850.70 00:10:49.070 lat (usec): min=7390, max=59520, avg=18891.13, stdev=9920.58 00:10:49.070 clat percentiles (usec): 00:10:49.070 | 1.00th=[ 9765], 5.00th=[11994], 10.00th=[12125], 20.00th=[12387], 00:10:49.070 | 30.00th=[13435], 40.00th=[13960], 50.00th=[14353], 60.00th=[15008], 00:10:49.070 | 70.00th=[19530], 80.00th=[23987], 90.00th=[27395], 95.00th=[42206], 00:10:49.070 | 99.00th=[58983], 99.50th=[59507], 99.90th=[59507], 99.95th=[59507], 00:10:49.070 | 99.99th=[59507] 00:10:49.070 bw ( KiB/s): min=12304, max=16368, per=22.02%, avg=14336.00, stdev=2873.68, samples=2 00:10:49.070 iops : min= 3076, max= 4092, avg=3584.00, stdev=718.42, samples=2 
00:10:49.070 lat (msec) : 4=0.01%, 10=1.66%, 20=71.96%, 50=24.99%, 100=1.38% 00:10:49.070 cpu : usr=4.17%, sys=8.15%, ctx=260, majf=0, minf=1 00:10:49.070 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:49.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.070 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:49.070 issued rwts: total=3388,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.070 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:49.070 00:10:49.070 Run status group 0 (all jobs): 00:10:49.070 READ: bw=58.5MiB/s (61.4MB/s), 11.9MiB/s-17.9MiB/s (12.4MB/s-18.8MB/s), io=59.2MiB (62.1MB), run=1004-1012msec 00:10:49.070 WRITE: bw=63.6MiB/s (66.7MB/s), 13.4MiB/s-18.6MiB/s (14.1MB/s-19.5MB/s), io=64.3MiB (67.5MB), run=1004-1012msec 00:10:49.070 00:10:49.070 Disk stats (read/write): 00:10:49.070 nvme0n1: ios=3374/3584, merge=0/0, ticks=21925/20462, in_queue=42387, util=96.49% 00:10:49.070 nvme0n2: ios=2580/3072, merge=0/0, ticks=32186/61679, in_queue=93865, util=96.33% 00:10:49.070 nvme0n3: ios=3630/4095, merge=0/0, ticks=31237/35363, in_queue=66600, util=98.95% 00:10:49.070 nvme0n4: ios=2779/3072, merge=0/0, ticks=20379/25774, in_queue=46153, util=97.98% 00:10:49.070 00:53:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:49.070 [global] 00:10:49.070 thread=1 00:10:49.070 invalidate=1 00:10:49.070 rw=randwrite 00:10:49.070 time_based=1 00:10:49.070 runtime=1 00:10:49.071 ioengine=libaio 00:10:49.071 direct=1 00:10:49.071 bs=4096 00:10:49.071 iodepth=128 00:10:49.071 norandommap=0 00:10:49.071 numjobs=1 00:10:49.071 00:10:49.071 verify_dump=1 00:10:49.071 verify_backlog=512 00:10:49.071 verify_state_save=0 00:10:49.071 do_verify=1 00:10:49.071 verify=crc32c-intel 00:10:49.071 [job0] 00:10:49.071 filename=/dev/nvme0n1 
00:10:49.071 [job1] 00:10:49.071 filename=/dev/nvme0n2 00:10:49.071 [job2] 00:10:49.071 filename=/dev/nvme0n3 00:10:49.071 [job3] 00:10:49.071 filename=/dev/nvme0n4 00:10:49.071 Could not set queue depth (nvme0n1) 00:10:49.071 Could not set queue depth (nvme0n2) 00:10:49.071 Could not set queue depth (nvme0n3) 00:10:49.071 Could not set queue depth (nvme0n4) 00:10:49.071 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:49.071 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:49.071 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:49.071 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:49.071 fio-3.35 00:10:49.071 Starting 4 threads 00:10:50.451 00:10:50.451 job0: (groupid=0, jobs=1): err= 0: pid=1752067: Fri Jul 26 00:53:20 2024 00:10:50.451 read: IOPS=2770, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1002msec) 00:10:50.451 slat (usec): min=3, max=17032, avg=141.93, stdev=893.10 00:10:50.451 clat (usec): min=1228, max=53680, avg=19823.07, stdev=9576.15 00:10:50.451 lat (usec): min=2713, max=53714, avg=19965.00, stdev=9656.79 00:10:50.451 clat percentiles (usec): 00:10:50.451 | 1.00th=[ 3195], 5.00th=[ 6063], 10.00th=[ 8717], 20.00th=[12125], 00:10:50.451 | 30.00th=[13698], 40.00th=[15008], 50.00th=[18744], 60.00th=[22414], 00:10:50.451 | 70.00th=[23200], 80.00th=[26084], 90.00th=[34866], 95.00th=[39584], 00:10:50.451 | 99.00th=[44303], 99.50th=[47449], 99.90th=[50070], 99.95th=[52167], 00:10:50.451 | 99.99th=[53740] 00:10:50.451 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:10:50.451 slat (usec): min=4, max=13907, avg=168.69, stdev=942.29 00:10:50.451 clat (usec): min=297, max=55028, avg=23242.17, stdev=10310.51 00:10:50.451 lat (usec): min=340, max=55036, avg=23410.87, 
stdev=10391.88 00:10:50.451 clat percentiles (usec): 00:10:50.451 | 1.00th=[ 1893], 5.00th=[ 9503], 10.00th=[10683], 20.00th=[13173], 00:10:50.451 | 30.00th=[16319], 40.00th=[20055], 50.00th=[22938], 60.00th=[25822], 00:10:50.451 | 70.00th=[27919], 80.00th=[31589], 90.00th=[37487], 95.00th=[41681], 00:10:50.452 | 99.00th=[50594], 99.50th=[51119], 99.90th=[54789], 99.95th=[54789], 00:10:50.452 | 99.99th=[54789] 00:10:50.452 bw ( KiB/s): min= 9592, max=14984, per=20.23%, avg=12288.00, stdev=3812.72, samples=2 00:10:50.452 iops : min= 2398, max= 3746, avg=3072.00, stdev=953.18, samples=2 00:10:50.452 lat (usec) : 500=0.02%, 750=0.09% 00:10:50.452 lat (msec) : 2=0.53%, 4=1.33%, 10=7.71%, 20=37.76%, 50=51.92% 00:10:50.452 lat (msec) : 100=0.65% 00:10:50.452 cpu : usr=4.10%, sys=5.69%, ctx=297, majf=0, minf=11 00:10:50.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:10:50.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:50.452 issued rwts: total=2776,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.452 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:50.452 job1: (groupid=0, jobs=1): err= 0: pid=1752068: Fri Jul 26 00:53:20 2024 00:10:50.452 read: IOPS=3878, BW=15.1MiB/s (15.9MB/s)(15.2MiB/1003msec) 00:10:50.452 slat (usec): min=2, max=11979, avg=127.79, stdev=755.23 00:10:50.452 clat (usec): min=571, max=63211, avg=16985.16, stdev=7338.14 00:10:50.452 lat (usec): min=581, max=65584, avg=17112.95, stdev=7367.54 00:10:50.452 clat percentiles (usec): 00:10:50.452 | 1.00th=[ 8717], 5.00th=[ 9765], 10.00th=[10814], 20.00th=[11863], 00:10:50.452 | 30.00th=[12256], 40.00th=[12518], 50.00th=[13042], 60.00th=[15664], 00:10:50.452 | 70.00th=[19792], 80.00th=[23200], 90.00th=[29754], 95.00th=[32900], 00:10:50.452 | 99.00th=[35914], 99.50th=[38011], 99.90th=[38011], 99.95th=[38011], 00:10:50.452 | 99.99th=[63177] 
00:10:50.452 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:10:50.452 slat (usec): min=4, max=12648, avg=108.90, stdev=708.21 00:10:50.452 clat (usec): min=6050, max=38675, avg=14836.83, stdev=5477.49 00:10:50.452 lat (usec): min=6058, max=38694, avg=14945.74, stdev=5535.26 00:10:50.452 clat percentiles (usec): 00:10:50.452 | 1.00th=[ 7439], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[11207], 00:10:50.452 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12387], 60.00th=[13566], 00:10:50.452 | 70.00th=[15401], 80.00th=[17957], 90.00th=[25035], 95.00th=[26608], 00:10:50.452 | 99.00th=[31851], 99.50th=[32375], 99.90th=[36439], 99.95th=[37487], 00:10:50.452 | 99.99th=[38536] 00:10:50.452 bw ( KiB/s): min=14584, max=18184, per=26.97%, avg=16384.00, stdev=2545.58, samples=2 00:10:50.452 iops : min= 3646, max= 4546, avg=4096.00, stdev=636.40, samples=2 00:10:50.452 lat (usec) : 750=0.15% 00:10:50.452 lat (msec) : 4=0.01%, 10=8.34%, 20=68.46%, 50=23.03%, 100=0.01% 00:10:50.452 cpu : usr=5.99%, sys=7.68%, ctx=342, majf=0, minf=15 00:10:50.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:50.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:50.452 issued rwts: total=3890,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.452 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:50.452 job2: (groupid=0, jobs=1): err= 0: pid=1752069: Fri Jul 26 00:53:20 2024 00:10:50.452 read: IOPS=4358, BW=17.0MiB/s (17.9MB/s)(17.1MiB/1006msec) 00:10:50.452 slat (usec): min=2, max=14436, avg=113.77, stdev=586.95 00:10:50.452 clat (usec): min=3583, max=41701, avg=14725.96, stdev=5280.30 00:10:50.452 lat (usec): min=5944, max=41718, avg=14839.73, stdev=5310.64 00:10:50.452 clat percentiles (usec): 00:10:50.452 | 1.00th=[ 9110], 5.00th=[10683], 10.00th=[11076], 20.00th=[11600], 00:10:50.452 | 30.00th=[12518], 
40.00th=[13173], 50.00th=[13698], 60.00th=[14353], 00:10:50.452 | 70.00th=[14746], 80.00th=[15270], 90.00th=[17433], 95.00th=[26870], 00:10:50.452 | 99.00th=[40109], 99.50th=[40109], 99.90th=[41681], 99.95th=[41681], 00:10:50.452 | 99.99th=[41681] 00:10:50.452 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:10:50.452 slat (usec): min=3, max=7968, avg=97.70, stdev=498.20 00:10:50.452 clat (usec): min=6020, max=38943, avg=13582.77, stdev=4264.31 00:10:50.452 lat (usec): min=6033, max=38959, avg=13680.47, stdev=4275.18 00:10:50.452 clat percentiles (usec): 00:10:50.452 | 1.00th=[ 8717], 5.00th=[ 9896], 10.00th=[10683], 20.00th=[11338], 00:10:50.452 | 30.00th=[11863], 40.00th=[12387], 50.00th=[12649], 60.00th=[13042], 00:10:50.452 | 70.00th=[13435], 80.00th=[14484], 90.00th=[15401], 95.00th=[24249], 00:10:50.452 | 99.00th=[32113], 99.50th=[38011], 99.90th=[39060], 99.95th=[39060], 00:10:50.452 | 99.99th=[39060] 00:10:50.452 bw ( KiB/s): min=16384, max=20480, per=30.34%, avg=18432.00, stdev=2896.31, samples=2 00:10:50.452 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:10:50.452 lat (msec) : 4=0.01%, 10=3.98%, 20=88.18%, 50=7.83% 00:10:50.452 cpu : usr=6.97%, sys=9.75%, ctx=470, majf=0, minf=9 00:10:50.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:50.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:50.452 issued rwts: total=4385,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.452 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:50.452 job3: (groupid=0, jobs=1): err= 0: pid=1752073: Fri Jul 26 00:53:20 2024 00:10:50.452 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:10:50.452 slat (usec): min=3, max=11810, avg=140.02, stdev=830.06 00:10:50.452 clat (usec): min=8665, max=51081, avg=18523.03, stdev=7825.49 00:10:50.452 lat (usec): 
min=8684, max=51120, avg=18663.05, stdev=7899.34 00:10:50.452 clat percentiles (usec): 00:10:50.452 | 1.00th=[ 9503], 5.00th=[11863], 10.00th=[12387], 20.00th=[12780], 00:10:50.452 | 30.00th=[14091], 40.00th=[15533], 50.00th=[16319], 60.00th=[16712], 00:10:50.452 | 70.00th=[18744], 80.00th=[21890], 90.00th=[32637], 95.00th=[39060], 00:10:50.452 | 99.00th=[45351], 99.50th=[47449], 99.90th=[47973], 99.95th=[47973], 00:10:50.452 | 99.99th=[51119] 00:10:50.452 write: IOPS=3485, BW=13.6MiB/s (14.3MB/s)(13.7MiB/1005msec); 0 zone resets 00:10:50.452 slat (usec): min=4, max=14682, avg=151.02, stdev=874.79 00:10:50.452 clat (usec): min=4580, max=56761, avg=19935.94, stdev=8862.87 00:10:50.452 lat (usec): min=5187, max=56776, avg=20086.96, stdev=8950.56 00:10:50.452 clat percentiles (usec): 00:10:50.452 | 1.00th=[ 8029], 5.00th=[11469], 10.00th=[11731], 20.00th=[11994], 00:10:50.452 | 30.00th=[12780], 40.00th=[14484], 50.00th=[19268], 60.00th=[22676], 00:10:50.452 | 70.00th=[22938], 80.00th=[23987], 90.00th=[30802], 95.00th=[36963], 00:10:50.452 | 99.00th=[51643], 99.50th=[53740], 99.90th=[56886], 99.95th=[56886], 00:10:50.452 | 99.99th=[56886] 00:10:50.452 bw ( KiB/s): min=12408, max=14600, per=22.23%, avg=13504.00, stdev=1549.98, samples=2 00:10:50.452 iops : min= 3102, max= 3650, avg=3376.00, stdev=387.49, samples=2 00:10:50.452 lat (msec) : 10=2.72%, 20=60.56%, 50=36.02%, 100=0.70% 00:10:50.452 cpu : usr=4.68%, sys=8.47%, ctx=305, majf=0, minf=17 00:10:50.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:50.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:50.452 issued rwts: total=3072,3503,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.452 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:50.452 00:10:50.452 Run status group 0 (all jobs): 00:10:50.452 READ: bw=54.8MiB/s (57.5MB/s), 10.8MiB/s-17.0MiB/s 
(11.3MB/s-17.9MB/s), io=55.2MiB (57.8MB), run=1002-1006msec 00:10:50.452 WRITE: bw=59.3MiB/s (62.2MB/s), 12.0MiB/s-17.9MiB/s (12.6MB/s-18.8MB/s), io=59.7MiB (62.6MB), run=1002-1006msec 00:10:50.452 00:10:50.452 Disk stats (read/write): 00:10:50.452 nvme0n1: ios=2101/2319, merge=0/0, ticks=23352/32308, in_queue=55660, util=97.90% 00:10:50.452 nvme0n2: ios=3489/3584, merge=0/0, ticks=20068/17745, in_queue=37813, util=97.87% 00:10:50.452 nvme0n3: ios=4151/4127, merge=0/0, ticks=16099/14282, in_queue=30381, util=95.20% 00:10:50.452 nvme0n4: ios=2611/2759, merge=0/0, ticks=24452/26985, in_queue=51437, util=97.68% 00:10:50.452 00:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:50.452 00:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1752221 00:10:50.452 00:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:50.452 00:53:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:50.452 [global] 00:10:50.452 thread=1 00:10:50.452 invalidate=1 00:10:50.452 rw=read 00:10:50.452 time_based=1 00:10:50.452 runtime=10 00:10:50.452 ioengine=libaio 00:10:50.452 direct=1 00:10:50.452 bs=4096 00:10:50.452 iodepth=1 00:10:50.452 norandommap=1 00:10:50.452 numjobs=1 00:10:50.452 00:10:50.452 [job0] 00:10:50.452 filename=/dev/nvme0n1 00:10:50.452 [job1] 00:10:50.452 filename=/dev/nvme0n2 00:10:50.452 [job2] 00:10:50.452 filename=/dev/nvme0n3 00:10:50.452 [job3] 00:10:50.452 filename=/dev/nvme0n4 00:10:50.452 Could not set queue depth (nvme0n1) 00:10:50.452 Could not set queue depth (nvme0n2) 00:10:50.452 Could not set queue depth (nvme0n3) 00:10:50.452 Could not set queue depth (nvme0n4) 00:10:50.452 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:50.452 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:50.452 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:50.452 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:50.452 fio-3.35 00:10:50.452 Starting 4 threads 00:10:53.739 00:53:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:53.739 00:53:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:53.739 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=33710080, buflen=4096 00:10:53.739 fio: pid=1752422, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:53.739 00:53:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:53.739 00:53:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:53.997 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=17137664, buflen=4096 00:10:53.997 fio: pid=1752421, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:54.257 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=50167808, buflen=4096 00:10:54.257 fio: pid=1752419, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:54.257 00:53:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:54.257 00:53:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:54.518 
00:53:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:54.518 00:53:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:54.518 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=491520, buflen=4096 00:10:54.518 fio: pid=1752420, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:10:54.518 00:10:54.518 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1752419: Fri Jul 26 00:53:24 2024 00:10:54.518 read: IOPS=3523, BW=13.8MiB/s (14.4MB/s)(47.8MiB/3476msec) 00:10:54.518 slat (usec): min=4, max=10634, avg=12.15, stdev=130.64 00:10:54.518 clat (usec): min=202, max=2088, avg=267.11, stdev=50.82 00:10:54.518 lat (usec): min=207, max=10971, avg=279.25, stdev=141.61 00:10:54.518 clat percentiles (usec): 00:10:54.518 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 231], 00:10:54.518 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 265], 00:10:54.518 | 70.00th=[ 277], 80.00th=[ 306], 90.00th=[ 334], 95.00th=[ 363], 00:10:54.518 | 99.00th=[ 420], 99.50th=[ 469], 99.90th=[ 545], 99.95th=[ 553], 00:10:54.518 | 99.99th=[ 627] 00:10:54.518 bw ( KiB/s): min=11504, max=15720, per=53.00%, avg=13961.33, stdev=1532.36, samples=6 00:10:54.518 iops : min= 2876, max= 3930, avg=3490.33, stdev=383.09, samples=6 00:10:54.518 lat (usec) : 250=52.82%, 500=46.82%, 750=0.34% 00:10:54.518 lat (msec) : 4=0.01% 00:10:54.518 cpu : usr=1.55%, sys=5.12%, ctx=12252, majf=0, minf=1 00:10:54.518 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.518 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.518 issued rwts: total=12249,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:10:54.518 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.518 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1752420: Fri Jul 26 00:53:24 2024 00:10:54.518 read: IOPS=32, BW=128KiB/s (131kB/s)(480KiB/3763msec) 00:10:54.518 slat (usec): min=6, max=11869, avg=325.60, stdev=1718.26 00:10:54.518 clat (usec): min=250, max=42336, avg=31019.71, stdev=17768.39 00:10:54.518 lat (usec): min=263, max=53989, avg=31291.41, stdev=18002.68 00:10:54.518 clat percentiles (usec): 00:10:54.518 | 1.00th=[ 289], 5.00th=[ 326], 10.00th=[ 371], 20.00th=[ 416], 00:10:54.518 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:54.518 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:10:54.518 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:54.518 | 99.99th=[42206] 00:10:54.518 bw ( KiB/s): min= 96, max= 216, per=0.49%, avg=129.71, stdev=46.30, samples=7 00:10:54.518 iops : min= 24, max= 54, avg=32.43, stdev=11.57, samples=7 00:10:54.518 lat (usec) : 500=23.14%, 750=1.65% 00:10:54.518 lat (msec) : 50=74.38% 00:10:54.518 cpu : usr=0.00%, sys=0.27%, ctx=126, majf=0, minf=1 00:10:54.518 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.518 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.518 issued rwts: total=121,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.518 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.518 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1752421: Fri Jul 26 00:53:24 2024 00:10:54.518 read: IOPS=1291, BW=5164KiB/s (5288kB/s)(16.3MiB/3241msec) 00:10:54.518 slat (nsec): min=5205, max=51914, avg=12040.01, stdev=5982.57 00:10:54.518 clat (usec): min=240, max=41427, avg=753.98, 
stdev=4121.37 00:10:54.518 lat (usec): min=251, max=41459, avg=766.02, stdev=4122.11 00:10:54.518 clat percentiles (usec): 00:10:54.518 | 1.00th=[ 258], 5.00th=[ 273], 10.00th=[ 285], 20.00th=[ 297], 00:10:54.518 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 318], 60.00th=[ 330], 00:10:54.518 | 70.00th=[ 338], 80.00th=[ 359], 90.00th=[ 396], 95.00th=[ 445], 00:10:54.518 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:54.518 | 99.99th=[41681] 00:10:54.518 bw ( KiB/s): min= 112, max=10408, per=21.14%, avg=5570.67, stdev=4618.67, samples=6 00:10:54.518 iops : min= 28, max= 2602, avg=1392.67, stdev=1154.67, samples=6 00:10:54.518 lat (usec) : 250=0.24%, 500=97.47%, 750=1.19% 00:10:54.518 lat (msec) : 4=0.02%, 50=1.05% 00:10:54.518 cpu : usr=0.93%, sys=2.56%, ctx=4186, majf=0, minf=1 00:10:54.518 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.518 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.518 issued rwts: total=4185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.518 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.518 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1752422: Fri Jul 26 00:53:24 2024 00:10:54.518 read: IOPS=2825, BW=11.0MiB/s (11.6MB/s)(32.1MiB/2913msec) 00:10:54.518 slat (nsec): min=4567, max=68382, avg=14985.36, stdev=9071.74 00:10:54.518 clat (usec): min=210, max=41499, avg=332.64, stdev=1186.56 00:10:54.518 lat (usec): min=220, max=41532, avg=347.62, stdev=1186.79 00:10:54.518 clat percentiles (usec): 00:10:54.518 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 243], 00:10:54.518 | 30.00th=[ 251], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 293], 00:10:54.518 | 70.00th=[ 318], 80.00th=[ 343], 90.00th=[ 388], 95.00th=[ 445], 00:10:54.518 | 99.00th=[ 515], 99.50th=[ 537], 99.90th=[ 1237], 
99.95th=[40633], 00:10:54.518 | 99.99th=[41681] 00:10:54.518 bw ( KiB/s): min= 8000, max=15200, per=44.11%, avg=11619.20, stdev=2782.30, samples=5 00:10:54.518 iops : min= 2000, max= 3800, avg=2904.80, stdev=695.58, samples=5 00:10:54.518 lat (usec) : 250=29.11%, 500=69.24%, 750=1.52% 00:10:54.518 lat (msec) : 2=0.02%, 4=0.01%, 50=0.09% 00:10:54.518 cpu : usr=2.16%, sys=4.64%, ctx=8231, majf=0, minf=1 00:10:54.518 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.518 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.518 issued rwts: total=8231,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.518 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.518 00:10:54.518 Run status group 0 (all jobs): 00:10:54.518 READ: bw=25.7MiB/s (27.0MB/s), 128KiB/s-13.8MiB/s (131kB/s-14.4MB/s), io=96.8MiB (102MB), run=2913-3763msec 00:10:54.518 00:10:54.518 Disk stats (read/write): 00:10:54.518 nvme0n1: ios=11811/0, merge=0/0, ticks=3051/0, in_queue=3051, util=95.42% 00:10:54.518 nvme0n2: ios=116/0, merge=0/0, ticks=3556/0, in_queue=3556, util=95.82% 00:10:54.518 nvme0n3: ios=4181/0, merge=0/0, ticks=2988/0, in_queue=2988, util=96.79% 00:10:54.518 nvme0n4: ios=8147/0, merge=0/0, ticks=2621/0, in_queue=2621, util=96.75% 00:10:54.776 00:53:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:54.776 00:53:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:55.034 00:53:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:55.034 00:53:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:55.291 00:53:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:55.291 00:53:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:55.548 00:53:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:55.548 00:53:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:55.806 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:55.806 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1752221 00:10:55.806 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:55.806 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:55.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.806 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:55.806 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:55.806 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:55.806 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.806 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:55.806 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.806 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:55.806 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:55.806 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:55.806 nvmf hotplug test: fio failed as expected 00:10:55.806 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:56.064 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:56.064 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:56.064 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:56.064 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:56.064 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:56.064 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:56.064 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:56.064 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:56.064 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:56.064 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:56.064 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:56.064 rmmod nvme_tcp 00:10:56.064 rmmod nvme_fabrics 00:10:56.324 rmmod nvme_keyring 
00:10:56.324 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:56.324 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:56.324 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:56.324 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1750292 ']' 00:10:56.324 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1750292 00:10:56.324 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1750292 ']' 00:10:56.324 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1750292 00:10:56.324 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:56.324 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:56.324 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1750292 00:10:56.324 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:56.324 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:56.324 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1750292' 00:10:56.324 killing process with pid 1750292 00:10:56.324 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1750292 00:10:56.324 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1750292 00:10:56.584 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:56.584 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:56.584 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:56.584 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:56.584 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:56.584 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.584 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:56.584 00:53:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.492 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:58.492 00:10:58.492 real 0m23.547s 00:10:58.492 user 1m22.239s 00:10:58.492 sys 0m7.095s 00:10:58.492 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:58.492 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.492 ************************************ 00:10:58.492 END TEST nvmf_fio_target 00:10:58.492 ************************************ 00:10:58.492 00:53:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:58.492 00:53:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:58.492 00:53:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:58.492 00:53:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:58.492 ************************************ 00:10:58.492 START TEST nvmf_bdevio 00:10:58.492 ************************************ 00:10:58.492 
00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:58.751 * Looking for test storage... 00:10:58.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:58.751 00:53:28 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:58.751 00:53:28 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:10:58.751 00:53:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:00.691 00:53:30 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:00.691 00:53:30 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:00.691 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:00.691 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:00.691 00:53:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:00.691 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:00.691 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:00.691 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:00.951 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:00.951 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:00.951 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:00.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:00.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:11:00.951 00:11:00.951 --- 10.0.0.2 ping statistics --- 00:11:00.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.951 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:11:00.951 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:00.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:00.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:11:00.951 00:11:00.951 --- 10.0.0.1 ping statistics --- 00:11:00.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.951 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:11:00.951 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.951 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:11:00.951 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:00.951 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.951 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:00.951 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:00.951 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.951 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:00.951 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:00.951 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:00.951 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:00.951 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:00.951 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.951 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1755050 00:11:00.951 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:00.951 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1755050 00:11:00.951 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1755050 ']' 00:11:00.951 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.951 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:00.951 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.951 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:00.951 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.951 [2024-07-26 00:53:31.234641] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:11:00.951 [2024-07-26 00:53:31.234735] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.951 EAL: No free 2048 kB hugepages reported on node 1 00:11:00.951 [2024-07-26 00:53:31.310695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:01.209 [2024-07-26 00:53:31.411869] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:01.209 [2024-07-26 00:53:31.411933] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:01.209 [2024-07-26 00:53:31.411949] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.209 [2024-07-26 00:53:31.411962] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.209 [2024-07-26 00:53:31.411974] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:01.209 [2024-07-26 00:53:31.412071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:01.209 [2024-07-26 00:53:31.412127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:01.209 [2024-07-26 00:53:31.412178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:01.209 [2024-07-26 00:53:31.412181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:01.209 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:01.209 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:01.209 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:01.209 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:01.209 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.209 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.209 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:01.209 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.210 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.210 [2024-07-26 00:53:31.562298] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:01.210 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.210 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:01.210 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.210 00:53:31 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.210 Malloc0 00:11:01.210 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.210 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:01.210 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.210 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.210 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.210 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:01.210 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.210 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.210 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.210 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:01.210 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.210 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.210 [2024-07-26 00:53:31.613272] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:01.210 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.210 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:11:01.210 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:01.210 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:01.210 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:01.210 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:01.210 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:01.210 { 00:11:01.210 "params": { 00:11:01.210 "name": "Nvme$subsystem", 00:11:01.210 "trtype": "$TEST_TRANSPORT", 00:11:01.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:01.210 "adrfam": "ipv4", 00:11:01.210 "trsvcid": "$NVMF_PORT", 00:11:01.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:01.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:01.210 "hdgst": ${hdgst:-false}, 00:11:01.210 "ddgst": ${ddgst:-false} 00:11:01.210 }, 00:11:01.210 "method": "bdev_nvme_attach_controller" 00:11:01.210 } 00:11:01.210 EOF 00:11:01.210 )") 00:11:01.210 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:01.210 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:11:01.210 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:01.210 00:53:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:01.210 "params": { 00:11:01.210 "name": "Nvme1", 00:11:01.210 "trtype": "tcp", 00:11:01.210 "traddr": "10.0.0.2", 00:11:01.210 "adrfam": "ipv4", 00:11:01.210 "trsvcid": "4420", 00:11:01.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:01.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:01.210 "hdgst": false, 00:11:01.210 "ddgst": false 00:11:01.210 }, 00:11:01.210 "method": "bdev_nvme_attach_controller" 00:11:01.210 }' 00:11:01.468 [2024-07-26 00:53:31.656475] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:11:01.468 [2024-07-26 00:53:31.656562] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1755083 ] 00:11:01.468 EAL: No free 2048 kB hugepages reported on node 1 00:11:01.468 [2024-07-26 00:53:31.719117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:01.468 [2024-07-26 00:53:31.807612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.468 [2024-07-26 00:53:31.807660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:01.468 [2024-07-26 00:53:31.807663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.728 I/O targets: 00:11:01.728 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:01.728 00:11:01.728 00:11:01.728 CUnit - A unit testing framework for C - Version 2.1-3 00:11:01.728 http://cunit.sourceforge.net/ 00:11:01.728 00:11:01.728 00:11:01.728 Suite: bdevio tests on: Nvme1n1 00:11:01.988 Test: blockdev write read block ...passed 00:11:01.988 Test: blockdev write zeroes read block ...passed 00:11:01.988 Test: blockdev write zeroes read no split 
...passed 00:11:01.988 Test: blockdev write zeroes read split ...passed 00:11:01.988 Test: blockdev write zeroes read split partial ...passed 00:11:01.988 Test: blockdev reset ...[2024-07-26 00:53:32.266320] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:01.988 [2024-07-26 00:53:32.266426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc9a60 (9): Bad file descriptor 00:11:01.988 [2024-07-26 00:53:32.295419] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:01.988 passed 00:11:01.988 Test: blockdev write read 8 blocks ...passed 00:11:01.988 Test: blockdev write read size > 128k ...passed 00:11:01.988 Test: blockdev write read invalid size ...passed 00:11:01.988 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:01.988 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:01.988 Test: blockdev write read max offset ...passed 00:11:02.249 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:02.249 Test: blockdev writev readv 8 blocks ...passed 00:11:02.249 Test: blockdev writev readv 30 x 1block ...passed 00:11:02.249 Test: blockdev writev readv block ...passed 00:11:02.249 Test: blockdev writev readv size > 128k ...passed 00:11:02.249 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:02.249 Test: blockdev comparev and writev ...[2024-07-26 00:53:32.595832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:02.249 [2024-07-26 00:53:32.595868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:02.249 [2024-07-26 00:53:32.595892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:11:02.249 [2024-07-26 00:53:32.595908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:02.249 [2024-07-26 00:53:32.596304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:02.249 [2024-07-26 00:53:32.596331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:02.249 [2024-07-26 00:53:32.596354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:02.249 [2024-07-26 00:53:32.596369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:02.249 [2024-07-26 00:53:32.596744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:02.249 [2024-07-26 00:53:32.596770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:02.249 [2024-07-26 00:53:32.596792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:02.249 [2024-07-26 00:53:32.596808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:02.249 [2024-07-26 00:53:32.597198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:02.249 [2024-07-26 00:53:32.597225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:02.249 [2024-07-26 00:53:32.597248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:02.249 [2024-07-26 00:53:32.597265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:02.249 passed 00:11:02.508 Test: blockdev nvme passthru rw ...passed 00:11:02.508 Test: blockdev nvme passthru vendor specific ...[2024-07-26 00:53:32.679382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:02.508 [2024-07-26 00:53:32.679412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:02.508 [2024-07-26 00:53:32.679571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:02.508 [2024-07-26 00:53:32.679595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:02.508 [2024-07-26 00:53:32.679754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:02.508 [2024-07-26 00:53:32.679776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:02.508 [2024-07-26 00:53:32.679928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:02.508 [2024-07-26 00:53:32.679950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:02.508 passed 00:11:02.508 Test: blockdev nvme admin passthru ...passed 00:11:02.508 Test: blockdev copy ...passed 00:11:02.508 00:11:02.508 Run Summary: Type Total Ran Passed Failed Inactive 00:11:02.508 suites 1 1 n/a 0 0 00:11:02.508 tests 23 23 23 0 0 00:11:02.508 asserts 152 152 152 0 n/a 00:11:02.508 00:11:02.508 Elapsed time = 
1.269 seconds 00:11:02.508 00:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:02.508 00:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.508 00:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.508 00:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.508 00:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:02.508 00:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:02.508 00:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:02.508 00:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:02.768 00:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:02.768 00:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:02.768 00:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:02.769 00:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:02.769 rmmod nvme_tcp 00:11:02.769 rmmod nvme_fabrics 00:11:02.769 rmmod nvme_keyring 00:11:02.769 00:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:02.769 00:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:02.769 00:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:11:02.769 00:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1755050 ']' 00:11:02.769 00:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1755050 00:11:02.769 00:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@950 -- # '[' -z 1755050 ']' 00:11:02.769 00:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1755050 00:11:02.769 00:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:02.769 00:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:02.769 00:53:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1755050 00:11:02.769 00:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:02.769 00:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:02.769 00:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1755050' 00:11:02.769 killing process with pid 1755050 00:11:02.769 00:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1755050 00:11:02.769 00:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1755050 00:11:03.029 00:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:03.029 00:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:03.029 00:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:03.029 00:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:03.029 00:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:03.029 00:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.029 00:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.029 
00:53:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.981 00:53:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:04.981 00:11:04.981 real 0m6.426s 00:11:04.981 user 0m10.576s 00:11:04.981 sys 0m2.120s 00:11:04.981 00:53:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:04.981 00:53:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.981 ************************************ 00:11:04.981 END TEST nvmf_bdevio 00:11:04.981 ************************************ 00:11:04.981 00:53:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:04.981 00:11:04.981 real 3m49.749s 00:11:04.981 user 9m53.995s 00:11:04.981 sys 1m7.897s 00:11:04.981 00:53:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:04.981 00:53:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:04.981 ************************************ 00:11:04.981 END TEST nvmf_target_core 00:11:04.981 ************************************ 00:11:04.981 00:53:35 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:04.981 00:53:35 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:04.981 00:53:35 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:04.981 00:53:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:04.981 ************************************ 00:11:04.981 START TEST nvmf_target_extra 00:11:04.981 ************************************ 00:11:04.981 00:53:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:05.240 * Looking for test storage... 
00:11:05.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:05.240 00:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:05.240 00:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:05.240 00:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:05.240 00:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:05.240 00:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:05.240 00:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:05.240 00:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:05.240 00:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:05.240 00:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:05.240 00:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:05.240 00:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:05.240 00:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:05.240 00:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:05.240 00:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:05.240 00:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:05.241 00:53:35 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:05.241 ************************************ 00:11:05.241 START TEST nvmf_example 00:11:05.241 ************************************ 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:05.241 * Looking for test storage... 
00:11:05.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.241 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:05.242 00:53:35 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:05.242 00:53:35 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:11:05.242 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:07.147 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:07.147 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:07.147 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:07.148 00:53:37 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:07.148 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:07.148 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:07.148 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:07.406 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:07.406 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:07.406 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:07.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:07.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:11:07.406 00:11:07.406 --- 10.0.0.2 ping statistics --- 00:11:07.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.406 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:11:07.406 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:07.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:07.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:11:07.406 00:11:07.406 --- 10.0.0.1 ping statistics --- 00:11:07.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.406 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:11:07.406 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:07.406 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:11:07.406 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:07.406 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:07.406 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:07.406 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:07.406 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:07.406 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:07.406 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:07.406 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:07.406 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:07.406 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:07.407 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:07.407 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:07.407 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # 
NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:07.407 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1757245 00:11:07.407 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:07.407 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:07.407 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1757245 00:11:07.407 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 1757245 ']' 00:11:07.407 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.407 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:07.407 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:07.407 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:07.407 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:07.407 EAL: No free 2048 kB hugepages reported on node 1 00:11:08.341 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:08.341 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:08.341 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:08.341 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:08.341 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.341 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:08.341 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.341 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.341 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.341 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:08.341 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.341 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.341 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.341 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:08.341 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:08.341 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.341 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.341 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.341 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:08.341 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:08.341 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.341 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.341 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.341 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:08.341 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.341 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.341 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.342 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:08.342 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:08.601 EAL: No free 2048 kB hugepages reported on node 1 00:11:18.580 Initializing NVMe Controllers 00:11:18.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:18.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:18.580 Initialization complete. Launching workers. 00:11:18.580 ======================================================== 00:11:18.580 Latency(us) 00:11:18.580 Device Information : IOPS MiB/s Average min max 00:11:18.580 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15054.22 58.81 4250.89 882.94 16121.45 00:11:18.580 ======================================================== 00:11:18.580 Total : 15054.22 58.81 4250.89 882.94 16121.45 00:11:18.580 00:11:18.580 00:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:18.580 00:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:18.580 00:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:18.580 00:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:11:18.580 00:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:18.580 00:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:11:18.580 00:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:18.580 00:53:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:18.580 rmmod nvme_tcp 00:11:18.580 rmmod nvme_fabrics 00:11:18.839 rmmod nvme_keyring 00:11:18.839 00:53:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:18.839 00:53:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@124 -- # set -e 00:11:18.839 00:53:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:11:18.839 00:53:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1757245 ']' 00:11:18.839 00:53:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1757245 00:11:18.839 00:53:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 1757245 ']' 00:11:18.839 00:53:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 1757245 00:11:18.839 00:53:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:11:18.839 00:53:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:18.839 00:53:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1757245 00:11:18.839 00:53:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:11:18.839 00:53:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:18.839 00:53:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1757245' 00:11:18.839 killing process with pid 1757245 00:11:18.839 00:53:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 1757245 00:11:18.839 00:53:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 1757245 00:11:19.097 nvmf threads initialize successfully 00:11:19.097 bdev subsystem init successfully 00:11:19.097 created a nvmf target service 00:11:19.097 create targets's poll groups done 00:11:19.097 all subsystems of target started 00:11:19.097 nvmf target is running 00:11:19.097 all subsystems of target stopped 00:11:19.097 destroy targets's poll groups done 00:11:19.097 destroyed the nvmf target 
service 00:11:19.097 bdev subsystem finish successfully 00:11:19.097 nvmf threads destroy successfully 00:11:19.097 00:53:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:19.097 00:53:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:19.097 00:53:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:19.097 00:53:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:19.097 00:53:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:19.098 00:53:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.098 00:53:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.098 00:53:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.004 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:21.004 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:21.004 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:21.004 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:21.004 00:11:21.004 real 0m15.891s 00:11:21.004 user 0m45.307s 00:11:21.004 sys 0m3.240s 00:11:21.004 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:21.004 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:21.004 ************************************ 00:11:21.004 END TEST nvmf_example 00:11:21.004 ************************************ 00:11:21.004 00:53:51 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:21.004 00:53:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:21.004 00:53:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:21.004 00:53:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:21.004 ************************************ 00:11:21.004 START TEST nvmf_filesystem 00:11:21.004 ************************************ 00:11:21.004 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:21.266 * Looking for test storage... 00:11:21.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.266 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:21.266 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:21.266 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:21.266 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:21.266 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:21.266 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:21.266 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:21.266 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:21.266 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:21.266 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:21.266 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:21.266 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:21.266 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:21.266 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:21.266 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:21.266 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:21.266 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:21.266 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:21.266 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:21.266 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@31 -- # CONFIG_OCF=n 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 
00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # 
ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:21.267 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:21.267 #define SPDK_CONFIG_H 00:11:21.267 #define SPDK_CONFIG_APPS 1 00:11:21.267 #define SPDK_CONFIG_ARCH native 00:11:21.267 #undef SPDK_CONFIG_ASAN 00:11:21.267 #undef SPDK_CONFIG_AVAHI 00:11:21.267 #undef SPDK_CONFIG_CET 00:11:21.267 #define SPDK_CONFIG_COVERAGE 1 00:11:21.267 #define SPDK_CONFIG_CROSS_PREFIX 00:11:21.267 #undef SPDK_CONFIG_CRYPTO 00:11:21.267 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:21.267 #undef SPDK_CONFIG_CUSTOMOCF 00:11:21.267 #undef SPDK_CONFIG_DAOS 00:11:21.267 #define SPDK_CONFIG_DAOS_DIR 00:11:21.267 #define SPDK_CONFIG_DEBUG 1 00:11:21.267 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:21.267 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:21.267 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:21.267 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:21.267 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:21.267 #undef SPDK_CONFIG_DPDK_UADK 00:11:21.268 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:21.268 #define 
SPDK_CONFIG_EXAMPLES 1 00:11:21.268 #undef SPDK_CONFIG_FC 00:11:21.268 #define SPDK_CONFIG_FC_PATH 00:11:21.268 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:21.268 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:21.268 #undef SPDK_CONFIG_FUSE 00:11:21.268 #undef SPDK_CONFIG_FUZZER 00:11:21.268 #define SPDK_CONFIG_FUZZER_LIB 00:11:21.268 #undef SPDK_CONFIG_GOLANG 00:11:21.268 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:21.268 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:21.268 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:21.268 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:21.268 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:21.268 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:21.268 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:21.268 #define SPDK_CONFIG_IDXD 1 00:11:21.268 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:21.268 #undef SPDK_CONFIG_IPSEC_MB 00:11:21.268 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:21.268 #define SPDK_CONFIG_ISAL 1 00:11:21.268 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:21.268 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:21.268 #define SPDK_CONFIG_LIBDIR 00:11:21.268 #undef SPDK_CONFIG_LTO 00:11:21.268 #define SPDK_CONFIG_MAX_LCORES 128 00:11:21.268 #define SPDK_CONFIG_NVME_CUSE 1 00:11:21.268 #undef SPDK_CONFIG_OCF 00:11:21.268 #define SPDK_CONFIG_OCF_PATH 00:11:21.268 #define SPDK_CONFIG_OPENSSL_PATH 00:11:21.268 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:21.268 #define SPDK_CONFIG_PGO_DIR 00:11:21.268 #undef SPDK_CONFIG_PGO_USE 00:11:21.268 #define SPDK_CONFIG_PREFIX /usr/local 00:11:21.268 #undef SPDK_CONFIG_RAID5F 00:11:21.268 #undef SPDK_CONFIG_RBD 00:11:21.268 #define SPDK_CONFIG_RDMA 1 00:11:21.268 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:21.268 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:21.268 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:21.268 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:21.268 #define SPDK_CONFIG_SHARED 1 00:11:21.268 #undef SPDK_CONFIG_SMA 00:11:21.268 #define SPDK_CONFIG_TESTS 1 00:11:21.268 #undef SPDK_CONFIG_TSAN 00:11:21.268 #define 
SPDK_CONFIG_UBLK 1 00:11:21.268 #define SPDK_CONFIG_UBSAN 1 00:11:21.268 #undef SPDK_CONFIG_UNIT_TESTS 00:11:21.268 #undef SPDK_CONFIG_URING 00:11:21.268 #define SPDK_CONFIG_URING_PATH 00:11:21.268 #undef SPDK_CONFIG_URING_ZNS 00:11:21.268 #undef SPDK_CONFIG_USDT 00:11:21.268 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:21.268 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:21.268 #define SPDK_CONFIG_VFIO_USER 1 00:11:21.268 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:21.268 #define SPDK_CONFIG_VHOST 1 00:11:21.268 #define SPDK_CONFIG_VIRTIO 1 00:11:21.268 #undef SPDK_CONFIG_VTUNE 00:11:21.268 #define SPDK_CONFIG_VTUNE_DIR 00:11:21.268 #define SPDK_CONFIG_WERROR 1 00:11:21.268 #define SPDK_CONFIG_WPDK_DIR 00:11:21.268 #undef SPDK_CONFIG_XNVME 00:11:21.268 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.268 00:53:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:21.268 00:53:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:21.268 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:21.269 
00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:21.269 00:53:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:11:21.269 
00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : v22.11.4 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@140 -- # : true 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:21.269 00:53:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:21.269 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@265 -- # export valgrind= 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j48 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 1759015 ]] 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 1759015 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.GdtqD5 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.GdtqD5/tests/target /tmp/spdk.GdtqD5 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # df -T 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:11:21.270 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=953643008 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:11:21.271 00:53:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4330786816 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=53464088576 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=61994713088 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=8530624512 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30935175168 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997356544 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=62181376 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:21.271 00:53:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=12376535040 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=12398944256 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=22409216 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30996779008 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997356544 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=577536 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6199463936 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6199468032 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:11:21.271 * Looking for test storage... 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=53464088576 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # 
new_size=10745217024 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e 
/proc/self/fd/15 ]] 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.271 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.272 00:53:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:21.272 00:53:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:21.272 00:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:23.178 00:53:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:23.178 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:23.178 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:23.178 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:23.178 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:23.178 00:53:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:23.178 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:23.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:23.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:11:23.436 00:11:23.436 --- 10.0.0.2 ping statistics --- 00:11:23.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.436 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:23.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:23.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:11:23.436 00:11:23.436 --- 10.0.0.1 ping statistics --- 00:11:23.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.436 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:23.436 00:53:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:23.436 ************************************ 00:11:23.436 START TEST nvmf_filesystem_no_in_capsule 00:11:23.436 ************************************ 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1760643 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1760643 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@831 -- # '[' -z 1760643 ']' 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:23.436 00:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.436 [2024-07-26 00:53:53.815224] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:11:23.436 [2024-07-26 00:53:53.815311] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.436 EAL: No free 2048 kB hugepages reported on node 1 00:11:23.693 [2024-07-26 00:53:53.879163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:23.693 [2024-07-26 00:53:53.964520] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:23.693 [2024-07-26 00:53:53.964570] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:23.694 [2024-07-26 00:53:53.964598] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:23.694 [2024-07-26 00:53:53.964609] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:23.694 [2024-07-26 00:53:53.964618] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:23.694 [2024-07-26 00:53:53.964747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:23.694 [2024-07-26 00:53:53.964818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:23.694 [2024-07-26 00:53:53.964879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:23.694 [2024-07-26 00:53:53.964881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.694 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:23.694 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:23.694 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:23.694 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:23.694 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.694 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:23.694 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:23.694 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:23.694 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.694 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.694 [2024-07-26 00:53:54.109289] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:23.694 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.694 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:23.694 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.694 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.951 Malloc1 00:11:23.951 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.951 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:23.951 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.951 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.951 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.951 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:23.951 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.951 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.951 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.951 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:23.951 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.951 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.951 [2024-07-26 00:53:54.294122] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:23.951 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.951 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:23.951 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:23.951 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:23.951 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:23.951 00:53:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:23.951 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:23.951 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.951 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.951 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.951 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:23.951 { 00:11:23.951 "name": "Malloc1", 00:11:23.951 "aliases": [ 00:11:23.951 "87fb5af9-a5a8-4a63-93a0-7ab8be8b9bc1" 00:11:23.951 ], 00:11:23.951 "product_name": "Malloc disk", 00:11:23.951 "block_size": 512, 00:11:23.951 "num_blocks": 1048576, 00:11:23.951 "uuid": "87fb5af9-a5a8-4a63-93a0-7ab8be8b9bc1", 00:11:23.951 "assigned_rate_limits": { 00:11:23.951 "rw_ios_per_sec": 0, 00:11:23.951 "rw_mbytes_per_sec": 0, 00:11:23.951 "r_mbytes_per_sec": 0, 00:11:23.951 "w_mbytes_per_sec": 0 00:11:23.951 }, 00:11:23.951 "claimed": true, 00:11:23.951 "claim_type": "exclusive_write", 00:11:23.951 "zoned": false, 00:11:23.951 "supported_io_types": { 00:11:23.951 "read": true, 00:11:23.951 "write": true, 00:11:23.951 "unmap": true, 00:11:23.951 "flush": true, 00:11:23.951 "reset": true, 00:11:23.951 "nvme_admin": false, 00:11:23.951 "nvme_io": false, 00:11:23.951 "nvme_io_md": false, 00:11:23.951 "write_zeroes": true, 00:11:23.951 "zcopy": true, 00:11:23.951 "get_zone_info": false, 00:11:23.951 "zone_management": false, 00:11:23.951 "zone_append": false, 00:11:23.951 "compare": false, 00:11:23.951 "compare_and_write": 
false, 00:11:23.951 "abort": true, 00:11:23.951 "seek_hole": false, 00:11:23.951 "seek_data": false, 00:11:23.951 "copy": true, 00:11:23.951 "nvme_iov_md": false 00:11:23.951 }, 00:11:23.951 "memory_domains": [ 00:11:23.951 { 00:11:23.951 "dma_device_id": "system", 00:11:23.951 "dma_device_type": 1 00:11:23.951 }, 00:11:23.951 { 00:11:23.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.951 "dma_device_type": 2 00:11:23.951 } 00:11:23.951 ], 00:11:23.951 "driver_specific": {} 00:11:23.951 } 00:11:23.951 ]' 00:11:23.951 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:23.951 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:23.951 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:24.209 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:24.209 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:24.209 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:24.209 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:24.209 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:24.777 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:24.777 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:24.777 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:24.777 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:24.777 00:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:26.706 00:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:26.706 00:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:26.706 00:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:26.706 00:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:26.706 00:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:26.706 00:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:26.706 00:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:26.706 00:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:26.706 00:53:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:26.706 00:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:26.706 00:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:26.706 00:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:26.706 00:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:26.706 00:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:26.706 00:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:26.706 00:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:26.706 00:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:26.965 00:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:27.532 00:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:28.518 00:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:28.518 00:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:28.518 00:53:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:28.518 00:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:28.518 00:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.776 ************************************ 00:11:28.776 START TEST filesystem_ext4 00:11:28.776 ************************************ 00:11:28.776 00:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:28.776 00:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:28.776 00:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:28.776 00:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:28.776 00:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:28.776 00:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:28.776 00:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:28.776 00:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:28.776 00:53:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:28.776 00:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:28.776 00:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:28.776 mke2fs 1.46.5 (30-Dec-2021) 00:11:28.776 Discarding device blocks: 0/522240 done 00:11:28.776 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:28.776 Filesystem UUID: 1ce39149-6edf-4ec3-b37c-066965050d5d 00:11:28.776 Superblock backups stored on blocks: 00:11:28.776 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:28.776 00:11:28.776 Allocating group tables: 0/64 done 00:11:28.776 Writing inode tables: 0/64 done 00:11:31.310 Creating journal (8192 blocks): done 00:11:31.878 Writing superblocks and filesystem accounting information: 0/6426/64 done 00:11:31.878 00:11:31.878 00:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:31.878 00:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:32.136 00:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:32.136 00:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:32.136 00:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:32.136 00:54:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:32.136 00:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:32.136 00:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:32.136 00:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1760643 00:11:32.136 00:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:32.136 00:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:32.136 00:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:32.136 00:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:32.136 00:11:32.136 real 0m3.579s 00:11:32.136 user 0m0.016s 00:11:32.136 sys 0m0.063s 00:11:32.136 00:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:32.136 00:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:32.136 ************************************ 00:11:32.136 END TEST filesystem_ext4 00:11:32.136 ************************************ 00:11:32.136 00:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:32.136 
00:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:32.136 00:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:32.136 00:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.395 ************************************ 00:11:32.395 START TEST filesystem_btrfs 00:11:32.395 ************************************ 00:11:32.395 00:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:32.395 00:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:32.395 00:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:32.395 00:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:32.395 00:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:32.395 00:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:32.395 00:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:32.395 00:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:32.395 00:54:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:32.395 00:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:32.395 00:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:32.395 btrfs-progs v6.6.2 00:11:32.395 See https://btrfs.readthedocs.io for more information. 00:11:32.395 00:11:32.395 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:32.395 NOTE: several default settings have changed in version 5.15, please make sure 00:11:32.395 this does not affect your deployments: 00:11:32.395 - DUP for metadata (-m dup) 00:11:32.395 - enabled no-holes (-O no-holes) 00:11:32.395 - enabled free-space-tree (-R free-space-tree) 00:11:32.395 00:11:32.395 Label: (null) 00:11:32.395 UUID: 2d0f5f54-dd4b-4b5c-a84a-23e9f5d90fa0 00:11:32.395 Node size: 16384 00:11:32.395 Sector size: 4096 00:11:32.395 Filesystem size: 510.00MiB 00:11:32.395 Block group profiles: 00:11:32.395 Data: single 8.00MiB 00:11:32.395 Metadata: DUP 32.00MiB 00:11:32.395 System: DUP 8.00MiB 00:11:32.395 SSD detected: yes 00:11:32.395 Zoned device: no 00:11:32.395 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:32.395 Runtime features: free-space-tree 00:11:32.395 Checksum: crc32c 00:11:32.395 Number of devices: 1 00:11:32.395 Devices: 00:11:32.395 ID SIZE PATH 00:11:32.395 1 510.00MiB /dev/nvme0n1p1 00:11:32.395 00:11:32.395 00:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:32.395 00:54:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 
00:11:33.331 00:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:33.331 00:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:33.331 00:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:33.331 00:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:33.331 00:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:33.331 00:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:33.331 00:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1760643 00:11:33.331 00:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:33.331 00:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:33.331 00:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:33.331 00:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:33.331 00:11:33.331 real 0m1.031s 00:11:33.331 user 0m0.014s 00:11:33.331 sys 0m0.119s 00:11:33.331 00:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:11:33.331 00:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:33.331 ************************************ 00:11:33.331 END TEST filesystem_btrfs 00:11:33.331 ************************************ 00:11:33.331 00:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:33.331 00:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:33.331 00:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:33.331 00:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.331 ************************************ 00:11:33.331 START TEST filesystem_xfs 00:11:33.331 ************************************ 00:11:33.331 00:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:33.331 00:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:33.332 00:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:33.332 00:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:33.332 00:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:33.332 00:54:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:33.332 00:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:33.332 00:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:33.332 00:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:33.332 00:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:33.332 00:54:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:33.332 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:33.332 = sectsz=512 attr=2, projid32bit=1 00:11:33.332 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:33.332 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:33.332 data = bsize=4096 blocks=130560, imaxpct=25 00:11:33.332 = sunit=0 swidth=0 blks 00:11:33.332 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:33.332 log =internal log bsize=4096 blocks=16384, version=2 00:11:33.332 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:33.332 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:34.267 Discarding blocks...Done. 
00:11:34.267 00:54:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:34.267 00:54:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:36.795 00:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:36.795 00:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:36.795 00:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:36.795 00:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:36.795 00:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:36.795 00:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:36.795 00:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1760643 00:11:36.795 00:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:36.795 00:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:36.795 00:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:36.795 00:54:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:36.795 00:11:36.795 real 0m3.190s 00:11:36.795 user 0m0.019s 00:11:36.795 sys 0m0.060s 00:11:36.795 00:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:36.795 00:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:36.795 ************************************ 00:11:36.795 END TEST filesystem_xfs 00:11:36.795 ************************************ 00:11:36.795 00:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:36.795 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:36.795 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:36.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.796 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:36.796 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:36.796 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:36.796 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.796 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:36.796 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.796 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:36.796 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:36.796 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.796 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.796 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.796 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:36.796 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1760643 00:11:36.796 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1760643 ']' 00:11:36.796 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1760643 00:11:36.796 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:36.796 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:36.796 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1760643 00:11:36.796 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:36.796 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:36.796 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1760643' 00:11:36.796 killing process with pid 1760643 00:11:36.796 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1760643 00:11:36.796 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 1760643 00:11:37.364 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:37.364 00:11:37.364 real 0m13.877s 00:11:37.364 user 0m53.507s 00:11:37.364 sys 0m1.900s 00:11:37.364 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:37.364 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.364 ************************************ 00:11:37.364 END TEST nvmf_filesystem_no_in_capsule 00:11:37.364 ************************************ 00:11:37.364 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:37.364 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:37.364 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:37.364 00:54:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:37.364 ************************************ 00:11:37.364 START TEST nvmf_filesystem_in_capsule 00:11:37.364 ************************************ 00:11:37.364 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:37.364 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:37.364 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:37.364 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:37.364 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:37.364 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.364 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1762893 00:11:37.364 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:37.364 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1762893 00:11:37.364 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1762893 ']' 00:11:37.364 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.364 00:54:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:37.364 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.364 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:37.364 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.364 [2024-07-26 00:54:07.750309] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:11:37.364 [2024-07-26 00:54:07.750409] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.364 EAL: No free 2048 kB hugepages reported on node 1 00:11:37.622 [2024-07-26 00:54:07.816254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:37.622 [2024-07-26 00:54:07.909355] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:37.622 [2024-07-26 00:54:07.909417] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:37.622 [2024-07-26 00:54:07.909434] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:37.622 [2024-07-26 00:54:07.909447] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:37.622 [2024-07-26 00:54:07.909459] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
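The `waitforlisten 1762893` call above blocks until the freshly started `nvmf_tgt` is listening on `/var/tmp/spdk.sock` (the trace sets `max_retries=100` and prints the "Waiting for process..." banner). A rough sketch of that polling pattern, under the assumption that it checks for the UNIX socket and bails out early if the target process dies; the retry-count parameter here is added for illustration and is not part of the real helper's signature:

```shell
#!/bin/sh
# Sketch of a waitforlisten-style poll loop: wait for the target's RPC
# socket to appear, giving up after max_retries or if the pid is gone.
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=${3:-100}   # hypothetical parameter for this sketch
    local i=0
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while [ "$i" -lt "$max_retries" ]; do
        # Socket exists: the target is up and accepting RPCs.
        [ -S "$rpc_addr" ] && return 0
        # Target died before listening: stop waiting.
        kill -0 "$pid" 2>/dev/null || return 1
        sleep 0.1
        i=$((i + 1))
    done
    return 1
}
```

Once the loop returns 0, the script proceeds to the `nvmf_create_transport` / `bdev_malloc_create` / `nvmf_create_subsystem` RPC sequence seen in the following trace lines.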
00:11:37.622 [2024-07-26 00:54:07.909527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.622 [2024-07-26 00:54:07.909580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:37.622 [2024-07-26 00:54:07.909699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.622 [2024-07-26 00:54:07.909701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.622 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:37.622 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:37.622 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:37.622 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:37.622 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.880 [2024-07-26 00:54:08.066574] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.880 Malloc1 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.880 00:54:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.880 [2024-07-26 00:54:08.241224] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.880 00:54:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:37.880 { 00:11:37.880 "name": "Malloc1", 00:11:37.880 "aliases": [ 00:11:37.880 "d68ead05-933e-450e-830c-48cadca404e0" 00:11:37.880 ], 00:11:37.880 "product_name": "Malloc disk", 00:11:37.880 "block_size": 512, 00:11:37.880 "num_blocks": 1048576, 00:11:37.880 "uuid": "d68ead05-933e-450e-830c-48cadca404e0", 00:11:37.880 "assigned_rate_limits": { 00:11:37.880 "rw_ios_per_sec": 0, 00:11:37.880 "rw_mbytes_per_sec": 0, 00:11:37.880 "r_mbytes_per_sec": 0, 00:11:37.880 "w_mbytes_per_sec": 0 00:11:37.880 }, 00:11:37.880 "claimed": true, 00:11:37.880 "claim_type": "exclusive_write", 00:11:37.880 "zoned": false, 00:11:37.880 "supported_io_types": { 00:11:37.880 "read": true, 00:11:37.880 "write": true, 00:11:37.880 "unmap": true, 00:11:37.880 "flush": true, 00:11:37.880 "reset": true, 00:11:37.880 "nvme_admin": false, 00:11:37.880 "nvme_io": false, 00:11:37.880 "nvme_io_md": false, 00:11:37.880 "write_zeroes": true, 00:11:37.880 "zcopy": true, 00:11:37.880 "get_zone_info": false, 00:11:37.880 "zone_management": false, 00:11:37.880 "zone_append": false, 00:11:37.880 "compare": false, 00:11:37.880 "compare_and_write": false, 00:11:37.880 "abort": true, 00:11:37.880 "seek_hole": false, 00:11:37.880 "seek_data": false, 00:11:37.880 "copy": true, 00:11:37.880 "nvme_iov_md": false 00:11:37.880 }, 00:11:37.880 "memory_domains": [ 00:11:37.880 { 00:11:37.880 "dma_device_id": "system", 00:11:37.880 "dma_device_type": 1 00:11:37.880 }, 00:11:37.880 { 00:11:37.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.880 "dma_device_type": 2 00:11:37.880 } 00:11:37.880 ], 00:11:37.880 
"driver_specific": {} 00:11:37.880 } 00:11:37.880 ]' 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:37.880 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:38.139 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:38.139 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:38.139 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:38.139 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:38.139 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:38.704 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:38.704 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:38.704 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:38.704 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:11:38.704 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:40.603 00:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:40.603 00:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:40.603 00:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:40.861 00:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:40.861 00:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:40.861 00:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:40.861 00:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:40.861 00:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:40.861 00:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:40.861 00:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:40.861 00:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:40.861 00:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:40.861 00:54:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:40.861 00:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:40.861 00:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:40.861 00:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:40.861 00:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:40.861 00:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:41.430 00:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:42.367 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:42.367 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:42.367 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:42.367 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:42.367 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.367 ************************************ 00:11:42.367 START TEST filesystem_in_capsule_ext4 00:11:42.367 ************************************ 00:11:42.367 00:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:42.367 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:42.367 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:42.367 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:42.367 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:42.367 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:42.367 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:42.367 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:42.367 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:42.367 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:42.367 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:42.367 mke2fs 1.46.5 (30-Dec-2021) 00:11:42.367 Discarding device blocks: 
0/522240 done 00:11:42.367 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:42.367 Filesystem UUID: d2d78bbe-fa8d-4e3c-a6dd-34880d5238d6 00:11:42.367 Superblock backups stored on blocks: 00:11:42.367 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:42.367 00:11:42.367 Allocating group tables: 0/64 done 00:11:42.367 Writing inode tables: 0/64 done 00:11:43.745 Creating journal (8192 blocks): done 00:11:44.314 Writing superblocks and filesystem accounting information: 0/64 1/64 done 00:11:44.314 00:11:44.314 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:44.314 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:44.881 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:44.881 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:44.881 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:44.881 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:44.881 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:44.881 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:44.881 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 1762893 00:11:44.881 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:44.881 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:44.881 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:44.881 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:44.881 00:11:44.881 real 0m2.654s 00:11:44.881 user 0m0.018s 00:11:44.881 sys 0m0.055s 00:11:44.881 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:44.881 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:44.881 ************************************ 00:11:44.881 END TEST filesystem_in_capsule_ext4 00:11:44.881 ************************************ 00:11:44.881 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:44.881 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:44.881 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:44.881 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.141 ************************************ 00:11:45.141 START 
TEST filesystem_in_capsule_btrfs 00:11:45.141 ************************************ 00:11:45.141 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:45.141 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:45.141 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:45.141 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:45.141 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:45.141 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:45.141 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:45.141 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:45.141 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:45.141 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:45.141 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:45.141 btrfs-progs v6.6.2 00:11:45.141 See https://btrfs.readthedocs.io for more information. 00:11:45.141 00:11:45.141 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:45.141 NOTE: several default settings have changed in version 5.15, please make sure 00:11:45.141 this does not affect your deployments: 00:11:45.141 - DUP for metadata (-m dup) 00:11:45.141 - enabled no-holes (-O no-holes) 00:11:45.141 - enabled free-space-tree (-R free-space-tree) 00:11:45.141 00:11:45.141 Label: (null) 00:11:45.141 UUID: f2165286-3969-4283-b358-92ecdfab30c5 00:11:45.141 Node size: 16384 00:11:45.141 Sector size: 4096 00:11:45.141 Filesystem size: 510.00MiB 00:11:45.141 Block group profiles: 00:11:45.141 Data: single 8.00MiB 00:11:45.141 Metadata: DUP 32.00MiB 00:11:45.141 System: DUP 8.00MiB 00:11:45.141 SSD detected: yes 00:11:45.141 Zoned device: no 00:11:45.141 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:45.141 Runtime features: free-space-tree 00:11:45.141 Checksum: crc32c 00:11:45.141 Number of devices: 1 00:11:45.141 Devices: 00:11:45.141 ID SIZE PATH 00:11:45.141 1 510.00MiB /dev/nvme0n1p1 00:11:45.141 00:11:45.141 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:45.141 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:46.078 00:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:46.078 00:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:46.078 00:54:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:46.078 00:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:46.078 00:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:46.078 00:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:46.078 00:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1762893 00:11:46.078 00:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:46.078 00:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:46.078 00:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:46.078 00:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:46.078 00:11:46.078 real 0m1.133s 00:11:46.078 user 0m0.018s 00:11:46.078 sys 0m0.111s 00:11:46.078 00:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:46.078 00:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:46.078 ************************************ 00:11:46.078 END TEST 
filesystem_in_capsule_btrfs 00:11:46.078 ************************************ 00:11:46.078 00:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:46.078 00:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:46.078 00:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:46.078 00:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.078 ************************************ 00:11:46.078 START TEST filesystem_in_capsule_xfs 00:11:46.078 ************************************ 00:11:46.078 00:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:46.078 00:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:46.078 00:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:46.078 00:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:46.078 00:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:46.078 00:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:46.078 00:54:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:46.078 00:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:46.078 00:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:46.078 00:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:46.078 00:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:46.338 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:46.338 = sectsz=512 attr=2, projid32bit=1 00:11:46.338 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:46.338 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:46.338 data = bsize=4096 blocks=130560, imaxpct=25 00:11:46.338 = sunit=0 swidth=0 blks 00:11:46.338 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:46.338 log =internal log bsize=4096 blocks=16384, version=2 00:11:46.338 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:46.338 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:47.274 Discarding blocks...Done. 
00:11:47.274 00:54:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:47.274 00:54:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:49.180 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:49.180 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:49.180 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:49.180 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:49.180 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:49.180 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:49.180 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1762893 00:11:49.180 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:49.181 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:49.181 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:49.181 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:49.181 00:11:49.181 real 0m2.939s 00:11:49.181 user 0m0.019s 00:11:49.181 sys 0m0.057s 00:11:49.181 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:49.181 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:49.181 ************************************ 00:11:49.181 END TEST filesystem_in_capsule_xfs 00:11:49.181 ************************************ 00:11:49.181 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:49.181 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:49.181 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:49.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.181 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:49.181 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:49.181 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:49.181 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.441 00:54:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:49.441 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.441 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:49.441 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.441 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.441 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.441 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.441 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:49.441 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1762893 00:11:49.441 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1762893 ']' 00:11:49.441 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1762893 00:11:49.441 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:49.441 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:49.441 00:54:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1762893 00:11:49.441 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:49.441 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:49.441 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1762893' 00:11:49.441 killing process with pid 1762893 00:11:49.441 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 1762893 00:11:49.441 00:54:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 1762893 00:11:49.700 00:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:49.700 00:11:49.700 real 0m12.409s 00:11:49.700 user 0m47.578s 00:11:49.700 sys 0m1.874s 00:11:49.700 00:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:49.700 00:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.700 ************************************ 00:11:49.700 END TEST nvmf_filesystem_in_capsule 00:11:49.700 ************************************ 00:11:49.959 00:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:49.959 00:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:49.959 00:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:11:49.959 00:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:49.959 00:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:11:49.960 00:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:49.960 00:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:49.960 rmmod nvme_tcp 00:11:49.960 rmmod nvme_fabrics 00:11:49.960 rmmod nvme_keyring 00:11:49.960 00:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:49.960 00:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:11:49.960 00:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:11:49.960 00:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:49.960 00:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:49.960 00:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:49.960 00:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:49.960 00:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:49.960 00:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:49.960 00:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.960 00:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:49.960 00:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.869 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:51.869 00:11:51.869 real 
0m30.833s 00:11:51.869 user 1m42.016s 00:11:51.869 sys 0m5.391s 00:11:51.869 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:51.869 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:51.869 ************************************ 00:11:51.869 END TEST nvmf_filesystem 00:11:51.869 ************************************ 00:11:51.869 00:54:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:51.869 00:54:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:51.869 00:54:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:51.869 00:54:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:51.869 ************************************ 00:11:51.869 START TEST nvmf_target_discovery 00:11:51.869 ************************************ 00:11:51.869 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:52.128 * Looking for test storage... 
00:11:52.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.128 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:52.128 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:52.128 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:52.128 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.128 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.128 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.128 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.128 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.128 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.128 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.128 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.128 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.128 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:52.128 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:52.128 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.128 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:52.128 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:52.128 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:52.128 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:52.128 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.128 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.128 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.128 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.128 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.128 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.129 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:52.129 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.129 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:11:52.129 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:52.129 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:52.129 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:52.129 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.129 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.129 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:52.129 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:52.129 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:52.129 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:52.129 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:52.129 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 
-- # NVMF_PORT_REFERRAL=4430 00:11:52.129 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:52.129 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:52.129 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:52.129 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:52.129 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:52.129 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:52.129 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:52.129 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.129 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.129 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.129 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:52.129 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:52.129 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:11:52.129 00:54:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:11:54.045 
00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 
(0x8086 - 0x159b)' 00:11:54.045 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:54.045 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:54.045 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:54.046 00:54:24 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:54.046 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:54.046 00:54:24 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:54.046 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:54.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:54.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:11:54.046 00:11:54.046 --- 10.0.0.2 ping statistics --- 00:11:54.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.046 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:54.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:54.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:11:54.046 00:11:54.046 --- 10.0.0.1 ping statistics --- 00:11:54.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.046 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:54.046 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:54.306 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:54.306 00:54:24 
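The netns plumbing the harness just performed (nvmf/common.sh@229-268) can be condensed into a standalone sketch. The interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are taken verbatim from the log; actually applying the commands needs root, so the sketch defaults to a dry run that only records what it would execute.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the test-harness network setup seen in the log above.
# DRY_RUN=1 (default) records commands in $CMDS instead of executing them,
# since the real thing needs root. Set DRY_RUN=0 on a disposable box to apply.
DRY_RUN=${DRY_RUN:-1}
CMDS=""
run() { CMDS="$CMDS$*"$'\n'; [ "$DRY_RUN" = "1" ] || "$@"; }

NS=cvl_0_0_ns_spdk      # namespace holding the target-side port
TARGET_IF=cvl_0_0       # port moved into the namespace (gets 10.0.0.2)
INITIATOR_IF=cvl_0_1    # second port, stays in the root namespace (10.0.0.1)

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Let NVMe/TCP traffic in on the initiator side, then check both directions.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

The two pings correspond exactly to the ping output recorded above: root namespace to 10.0.0.2, then inside the namespace back to 10.0.0.1.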
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:54.306 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:54.306 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.306 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1766680 00:11:54.306 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:54.306 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1766680 00:11:54.306 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 1766680 ']' 00:11:54.306 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.306 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:54.306 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.306 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:54.306 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.306 [2024-07-26 00:54:24.518656] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:11:54.306 [2024-07-26 00:54:24.518725] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.306 EAL: No free 2048 kB hugepages reported on node 1 00:11:54.306 [2024-07-26 00:54:24.583980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:54.306 [2024-07-26 00:54:24.674490] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:54.306 [2024-07-26 00:54:24.674552] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:54.306 [2024-07-26 00:54:24.674581] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:54.306 [2024-07-26 00:54:24.674593] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:54.306 [2024-07-26 00:54:24.674603] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
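The target application itself is launched inside that namespace (nvmf/common.sh@480), after which the TCP transport is created. A hedged reconstruction follows; the `build/bin` and `scripts/rpc.py` paths are assumptions from a stock SPDK tree, since the log goes through wrapper functions (`nvmfappstart`, `rpc_cmd`), and the transport flags are copied verbatim from the log without interpreting them.

```shell
#!/usr/bin/env bash
# Dry-run sketch of starting nvmf_tgt inside the test namespace and creating
# the TCP transport, mirroring nvmfappstart/rpc_cmd in the log above.
# DRY_RUN=1 (default) only records the commands in $CMDS.
DRY_RUN=${DRY_RUN:-1}
CMDS=""
run() { CMDS="$CMDS$*"$'\n'; [ "$DRY_RUN" = "1" ] || "$@"; }

NS=cvl_0_0_ns_spdk
SPDK=${SPDK:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}

# -i 0: shared-memory instance id, -e 0xFFFF: tracepoint group mask,
# -m 0xF: run reactors on cores 0-3 (matches "Reactor started on core 0..3").
run ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF
# Once the app listens on /var/tmp/spdk.sock, create the transport.
# Flags "-t tcp -o -u 8192" copied exactly as issued via rpc_cmd in the log.
run "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
```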
00:11:54.306 [2024-07-26 00:54:24.674686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.306 [2024-07-26 00:54:24.674711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:54.306 [2024-07-26 00:54:24.674768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:54.306 [2024-07-26 00:54:24.674771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.566 [2024-07-26 00:54:24.831609] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:54.566 00:54:24 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.566 Null1 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.566 [2024-07-26 00:54:24.871914] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.566 Null2 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.566 
00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.566 Null3 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode3 Null3 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.566 Null4 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:54.566 00:54:24 
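The provisioning loop recorded above (target/discovery.sh@26-35) creates one null bdev, one subsystem, one namespace, and one TCP listener per cnode, then adds the discovery listener and a referral to port 4430. A dry-run sketch, with every RPC argument copied verbatim from the log; `scripts/rpc.py` is an assumption, since the log issues these through the `rpc_cmd` wrapper.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the provisioning loop in target/discovery.sh@26-35.
# DRY_RUN=1 (default) only records the commands in $CMDS.
DRY_RUN=${DRY_RUN:-1}
CMDS=""
run() { CMDS="$CMDS$*"$'\n'; [ "$DRY_RUN" = "1" ] || "$@"; }

RPC=${RPC:-scripts/rpc.py}   # assumed path to SPDK's JSON-RPC client
for i in 1 2 3 4; do
    # size/block-size arguments (102400 512) copied verbatim from the log
    run "$RPC" bdev_null_create "Null$i" 102400 512
    run "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "SPDK0000000000000$i"
    run "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    run "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done
run "$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
run "$RPC" nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
```

This is exactly the state the `nvme discover` output below reflects: six discovery log entries — the current discovery subsystem, cnode1-4 on port 4420, and the referral on 4430.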
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.566 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:11:54.826 00:11:54.826 Discovery Log Number of Records 6, Generation counter 6 00:11:54.826 =====Discovery Log Entry 0====== 00:11:54.826 trtype: tcp 00:11:54.826 adrfam: ipv4 00:11:54.826 subtype: current discovery subsystem 00:11:54.826 treq: not required 00:11:54.826 portid: 0 00:11:54.826 trsvcid: 4420 00:11:54.826 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:54.826 traddr: 10.0.0.2 00:11:54.826 eflags: explicit discovery connections, duplicate discovery information 00:11:54.826 sectype: none 00:11:54.826 =====Discovery Log Entry 1====== 00:11:54.826 trtype: tcp 00:11:54.826 adrfam: ipv4 00:11:54.826 subtype: nvme subsystem 00:11:54.826 treq: not required 00:11:54.826 portid: 0 00:11:54.826 trsvcid: 4420 00:11:54.826 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:54.826 traddr: 10.0.0.2 00:11:54.826 eflags: none 00:11:54.826 sectype: none 00:11:54.826 =====Discovery Log Entry 2====== 00:11:54.826 trtype: tcp 00:11:54.826 adrfam: ipv4 00:11:54.826 subtype: nvme subsystem 00:11:54.826 treq: not required 00:11:54.826 portid: 0 00:11:54.826 trsvcid: 4420 00:11:54.826 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:54.826 traddr: 10.0.0.2 00:11:54.826 eflags: none 00:11:54.826 sectype: none 00:11:54.826 =====Discovery Log Entry 3====== 00:11:54.826 trtype: tcp 00:11:54.826 adrfam: ipv4 00:11:54.826 subtype: nvme subsystem 00:11:54.826 treq: not required 00:11:54.826 portid: 
0 00:11:54.826 trsvcid: 4420 00:11:54.826 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:54.826 traddr: 10.0.0.2 00:11:54.826 eflags: none 00:11:54.826 sectype: none 00:11:54.826 =====Discovery Log Entry 4====== 00:11:54.826 trtype: tcp 00:11:54.826 adrfam: ipv4 00:11:54.826 subtype: nvme subsystem 00:11:54.826 treq: not required 00:11:54.826 portid: 0 00:11:54.826 trsvcid: 4420 00:11:54.826 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:54.826 traddr: 10.0.0.2 00:11:54.826 eflags: none 00:11:54.826 sectype: none 00:11:54.826 =====Discovery Log Entry 5====== 00:11:54.826 trtype: tcp 00:11:54.827 adrfam: ipv4 00:11:54.827 subtype: discovery subsystem referral 00:11:54.827 treq: not required 00:11:54.827 portid: 0 00:11:54.827 trsvcid: 4430 00:11:54.827 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:54.827 traddr: 10.0.0.2 00:11:54.827 eflags: none 00:11:54.827 sectype: none 00:11:54.827 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:54.827 Perform nvmf subsystem discovery via RPC 00:11:54.827 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:54.827 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.827 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.827 [ 00:11:54.827 { 00:11:54.827 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:54.827 "subtype": "Discovery", 00:11:54.827 "listen_addresses": [ 00:11:54.827 { 00:11:54.827 "trtype": "TCP", 00:11:54.827 "adrfam": "IPv4", 00:11:54.827 "traddr": "10.0.0.2", 00:11:54.827 "trsvcid": "4420" 00:11:54.827 } 00:11:54.827 ], 00:11:54.827 "allow_any_host": true, 00:11:54.827 "hosts": [] 00:11:54.827 }, 00:11:54.827 { 00:11:54.827 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:54.827 "subtype": "NVMe", 00:11:54.827 "listen_addresses": [ 
00:11:54.827 { 00:11:54.827 "trtype": "TCP", 00:11:54.827 "adrfam": "IPv4", 00:11:54.827 "traddr": "10.0.0.2", 00:11:54.827 "trsvcid": "4420" 00:11:54.827 } 00:11:54.827 ], 00:11:54.827 "allow_any_host": true, 00:11:54.827 "hosts": [], 00:11:54.827 "serial_number": "SPDK00000000000001", 00:11:54.827 "model_number": "SPDK bdev Controller", 00:11:54.827 "max_namespaces": 32, 00:11:54.827 "min_cntlid": 1, 00:11:54.827 "max_cntlid": 65519, 00:11:54.827 "namespaces": [ 00:11:54.827 { 00:11:54.827 "nsid": 1, 00:11:54.827 "bdev_name": "Null1", 00:11:54.827 "name": "Null1", 00:11:54.827 "nguid": "39F71B8C94374FDFAD2F332330B7C372", 00:11:54.827 "uuid": "39f71b8c-9437-4fdf-ad2f-332330b7c372" 00:11:54.827 } 00:11:54.827 ] 00:11:54.827 }, 00:11:54.827 { 00:11:54.827 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:54.827 "subtype": "NVMe", 00:11:54.827 "listen_addresses": [ 00:11:54.827 { 00:11:54.827 "trtype": "TCP", 00:11:54.827 "adrfam": "IPv4", 00:11:54.827 "traddr": "10.0.0.2", 00:11:54.827 "trsvcid": "4420" 00:11:54.827 } 00:11:54.827 ], 00:11:54.827 "allow_any_host": true, 00:11:54.827 "hosts": [], 00:11:54.827 "serial_number": "SPDK00000000000002", 00:11:54.827 "model_number": "SPDK bdev Controller", 00:11:54.827 "max_namespaces": 32, 00:11:54.827 "min_cntlid": 1, 00:11:54.827 "max_cntlid": 65519, 00:11:54.827 "namespaces": [ 00:11:54.827 { 00:11:54.827 "nsid": 1, 00:11:54.827 "bdev_name": "Null2", 00:11:54.827 "name": "Null2", 00:11:54.827 "nguid": "89C07162F67E4F588737B5C66C4A0F36", 00:11:54.827 "uuid": "89c07162-f67e-4f58-8737-b5c66c4a0f36" 00:11:54.827 } 00:11:54.827 ] 00:11:54.827 }, 00:11:54.827 { 00:11:54.827 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:54.827 "subtype": "NVMe", 00:11:54.827 "listen_addresses": [ 00:11:54.827 { 00:11:54.827 "trtype": "TCP", 00:11:54.827 "adrfam": "IPv4", 00:11:54.827 "traddr": "10.0.0.2", 00:11:54.827 "trsvcid": "4420" 00:11:54.827 } 00:11:54.827 ], 00:11:54.827 "allow_any_host": true, 00:11:54.827 "hosts": [], 00:11:54.827 
"serial_number": "SPDK00000000000003", 00:11:54.827 "model_number": "SPDK bdev Controller", 00:11:54.827 "max_namespaces": 32, 00:11:54.827 "min_cntlid": 1, 00:11:54.827 "max_cntlid": 65519, 00:11:54.827 "namespaces": [ 00:11:54.827 { 00:11:54.827 "nsid": 1, 00:11:54.827 "bdev_name": "Null3", 00:11:54.827 "name": "Null3", 00:11:54.827 "nguid": "CCF030D8E19B455F9C12BD9CA6AAFC89", 00:11:54.827 "uuid": "ccf030d8-e19b-455f-9c12-bd9ca6aafc89" 00:11:54.827 } 00:11:54.827 ] 00:11:54.827 }, 00:11:54.827 { 00:11:54.827 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:54.827 "subtype": "NVMe", 00:11:54.827 "listen_addresses": [ 00:11:54.827 { 00:11:54.827 "trtype": "TCP", 00:11:54.827 "adrfam": "IPv4", 00:11:54.827 "traddr": "10.0.0.2", 00:11:54.827 "trsvcid": "4420" 00:11:54.827 } 00:11:54.827 ], 00:11:54.827 "allow_any_host": true, 00:11:54.827 "hosts": [], 00:11:54.827 "serial_number": "SPDK00000000000004", 00:11:54.827 "model_number": "SPDK bdev Controller", 00:11:54.827 "max_namespaces": 32, 00:11:54.827 "min_cntlid": 1, 00:11:54.827 "max_cntlid": 65519, 00:11:54.827 "namespaces": [ 00:11:54.827 { 00:11:54.827 "nsid": 1, 00:11:54.827 "bdev_name": "Null4", 00:11:54.827 "name": "Null4", 00:11:54.827 "nguid": "D209B1F5BEF24CBBB27BF491B85B5E15", 00:11:54.827 "uuid": "d209b1f5-bef2-4cbb-b27b-f491b85b5e15" 00:11:54.827 } 00:11:54.827 ] 00:11:54.827 } 00:11:54.827 ] 00:11:54.827 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.827 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:54.827 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:54.827 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.827 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:54.827 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.827 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.827 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:54.827 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.827 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.827 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.827 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:54.827 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:54.827 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.827 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.827 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.827 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:54.827 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.827 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.088 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.088 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 
1 4) 00:11:55.088 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:55.088 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.088 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.088 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.088 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:55.088 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.088 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.088 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.088 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:55.088 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:55.088 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.088 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.088 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.088 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:55.088 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.088 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:55.088 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.088 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:55.088 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.088 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.088 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.088 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:55.088 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.089 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:55.089 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.089 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.089 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:55.089 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:55.089 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:55.089 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:55.089 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:55.089 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:11:55.089 
00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:55.089 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:11:55.089 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:55.089 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:55.089 rmmod nvme_tcp 00:11:55.089 rmmod nvme_fabrics 00:11:55.089 rmmod nvme_keyring 00:11:55.089 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:55.089 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:11:55.089 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:11:55.089 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1766680 ']' 00:11:55.089 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1766680 00:11:55.089 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 1766680 ']' 00:11:55.089 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 1766680 00:11:55.089 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:11:55.089 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:55.089 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1766680 00:11:55.089 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:55.089 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:11:55.089 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1766680' 00:11:55.089 killing process with pid 1766680 00:11:55.089 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 1766680 00:11:55.089 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 1766680 00:11:55.348 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:55.348 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:55.348 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:55.348 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:55.348 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:55.348 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.348 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.348 00:54:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.884 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:57.884 00:11:57.884 real 0m5.413s 00:11:57.884 user 0m4.618s 00:11:57.884 sys 0m1.806s 00:11:57.884 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:57.884 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.884 ************************************ 00:11:57.884 END TEST 
nvmf_target_discovery 00:11:57.884 ************************************ 00:11:57.884 00:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:57.884 00:54:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:57.884 00:54:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:57.884 00:54:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:57.884 ************************************ 00:11:57.884 START TEST nvmf_referrals 00:11:57.884 ************************************ 00:11:57.884 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:57.884 * Looking for test storage... 00:11:57.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:57.884 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:57.884 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:57.884 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:57.884 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:57.884 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:57.884 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:57.884 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:57.884 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:11:57.884 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:57.884 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:57.884 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:57.884 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:57.884 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:57.884 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:57.884 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:57.884 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:57.884 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:57.884 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.885 00:54:27 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:11:57.885 00:54:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:59.358 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:59.358 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.358 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:59.358 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found 
net devices under 0000:0a:00.1: cvl_0_1' 00:11:59.359 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:59.359 00:54:29 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:59.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:59.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:11:59.359 00:11:59.359 --- 10.0.0.2 ping statistics --- 00:11:59.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.359 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:59.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:59.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:11:59.359 00:11:59.359 --- 10.0.0.1 ping statistics --- 00:11:59.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.359 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:59.359 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.617 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1768649 00:11:59.617 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:59.617 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1768649 00:11:59.617 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 1768649 ']' 00:11:59.617 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.617 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:59.617 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.617 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:59.617 00:54:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.617 [2024-07-26 00:54:29.836809] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:11:59.617 [2024-07-26 00:54:29.836896] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.617 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.617 [2024-07-26 00:54:29.902312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.617 [2024-07-26 00:54:29.993502] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.617 [2024-07-26 00:54:29.993564] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:59.617 [2024-07-26 00:54:29.993581] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:59.617 [2024-07-26 00:54:29.993595] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:59.617 [2024-07-26 00:54:29.993607] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:59.617 [2024-07-26 00:54:29.993705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.617 [2024-07-26 00:54:29.993760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.617 [2024-07-26 00:54:29.993850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.617 [2024-07-26 00:54:29.993853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.875 [2024-07-26 00:54:30.153659] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.875 [2024-07-26 00:54:30.165898] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:59.875 00:54:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:59.875 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.133 00:54:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.133 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.392 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:00.392 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:00.392 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:00.392 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.392 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:00.392 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.392 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:00.392 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.392 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:00.392 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:00.392 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:00.392 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:00.392 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:00.392 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:00.392 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:00.392 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:00.392 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:00.392 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:00.392 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:00.392 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:00.392 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:00.392 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:00.392 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:00.649 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:00.649 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:00.649 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:00.649 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:00.649 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:00.649 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:00.649 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:00.649 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:00.649 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.649 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.649 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.649 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:00.649 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:00.649 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:00.649 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.649 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:00.649 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.649 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:00.649 00:54:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.649 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:00.649 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:00.649 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:00.649 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:00.650 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:00.650 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:00.650 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:00.650 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:00.908 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:00.908 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:00.908 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:00.908 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:00.908 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:00.908 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:00.908 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:00.908 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:00.908 00:54:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:00.908 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:00.908 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:00.908 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:00.908 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:01.166 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:01.166 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:01.166 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.166 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.166 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.166 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:01.166 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:01.166 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.166 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:01.166 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.166 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:01.166 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:01.166 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:01.166 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:01.166 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:01.166 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:01.166 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:01.166 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:01.166 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:01.166 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:01.166 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:01.166 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:01.166 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:12:01.166 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:01.166 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # 
set +e 00:12:01.166 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:01.166 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:01.166 rmmod nvme_tcp 00:12:01.166 rmmod nvme_fabrics 00:12:01.166 rmmod nvme_keyring 00:12:01.166 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:01.166 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:12:01.167 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:12:01.167 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1768649 ']' 00:12:01.167 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1768649 00:12:01.167 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 1768649 ']' 00:12:01.167 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 1768649 00:12:01.167 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:01.167 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:01.167 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1768649 00:12:01.426 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:01.426 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:01.426 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1768649' 00:12:01.426 killing process with pid 1768649 00:12:01.426 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@969 -- # kill 1768649 00:12:01.426 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 1768649 00:12:01.426 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:01.426 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:01.426 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:01.426 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:01.426 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:01.426 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.426 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.426 00:54:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:03.963 00:12:03.963 real 0m6.136s 00:12:03.963 user 0m8.494s 00:12:03.963 sys 0m2.005s 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.963 ************************************ 00:12:03.963 END TEST nvmf_referrals 00:12:03.963 ************************************ 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- 
# '[' 3 -le 1 ']' 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:03.963 ************************************ 00:12:03.963 START TEST nvmf_connect_disconnect 00:12:03.963 ************************************ 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:03.963 * Looking for test storage... 00:12:03.963 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 
00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.963 00:54:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.963 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.964 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.964 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:03.964 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:03.964 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:12:03.964 00:54:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 
-- # pci_devs=() 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:05.867 00:54:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:05.867 00:54:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:05.867 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:05.867 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:05.867 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:05.868 00:54:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:05.868 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:05.868 
00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:05.868 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:05.868 00:54:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:05.868 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:05.868 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:05.868 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:05.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:05.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:12:05.868 00:12:05.868 --- 10.0.0.2 ping statistics --- 00:12:05.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.868 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:12:05.868 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:05.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:05.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:12:05.868 00:12:05.868 --- 10.0.0.1 ping statistics --- 00:12:05.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.868 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:12:05.868 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:05.868 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:12:05.868 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:05.868 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:05.868 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:05.868 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:05.868 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:05.868 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:05.868 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:05.868 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 
00:12:05.868 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:05.868 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:05.868 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:05.868 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1770930 00:12:05.868 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1770930 00:12:05.868 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 1770930 ']' 00:12:05.868 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.868 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:05.868 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:05.868 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.868 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:05.868 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:05.868 [2024-07-26 00:54:36.125739] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:12:05.868 [2024-07-26 00:54:36.125819] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.868 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.868 [2024-07-26 00:54:36.194269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:05.868 [2024-07-26 00:54:36.289306] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.868 [2024-07-26 00:54:36.289370] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.868 [2024-07-26 00:54:36.289387] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.868 [2024-07-26 00:54:36.289401] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.868 [2024-07-26 00:54:36.289413] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:05.868 [2024-07-26 00:54:36.289491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.868 [2024-07-26 00:54:36.289548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.868 [2024-07-26 00:54:36.289614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:05.868 [2024-07-26 00:54:36.289617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.127 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:06.127 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:06.127 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:06.127 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:06.127 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:06.127 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.127 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:06.127 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.127 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:06.127 [2024-07-26 00:54:36.441516] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:06.127 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.127 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 
64 512 00:12:06.127 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.127 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:06.127 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.127 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:06.127 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:06.127 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.127 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:06.127 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.127 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:06.127 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.127 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:06.127 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.127 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:06.127 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.127 00:54:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:06.127 [2024-07-26 00:54:36.492453] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:06.127 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.127 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:06.127 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:06.127 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:06.127 00:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:08.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.065 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.599 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.539 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.443 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.962 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.881 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.756 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.288 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.730 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.705 [2024-07-26 00:56:36.775891] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8ec20 is same with the state(5) to be set 00:14:06.705 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.552 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.491 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.021 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.387 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.762 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:52.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.206 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:15:57.206 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:57.206 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:57.207 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:15:57.207 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:57.207 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:15:57.207 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:57.207 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:57.207 rmmod nvme_tcp 00:15:57.207 rmmod nvme_fabrics 00:15:57.207 rmmod nvme_keyring 00:15:57.207 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:57.207 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:15:57.207 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:15:57.207 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1770930 ']' 00:15:57.207 00:58:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1770930 00:15:57.207 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1770930 ']' 00:15:57.207 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 1770930 00:15:57.207 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:15:57.207 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:57.207 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1770930 00:15:57.207 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:57.207 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:57.207 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1770930' 00:15:57.207 killing process with pid 1770930 00:15:57.207 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 1770930 00:15:57.207 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 1770930 00:15:57.465 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:57.465 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:57.465 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:57.465 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:57.465 00:58:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:57.465 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.465 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:57.465 00:58:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.371 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:59.371 00:15:59.371 real 3m55.868s 00:15:59.371 user 14m57.885s 00:15:59.371 sys 0m34.984s 00:15:59.371 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:59.371 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:59.371 ************************************ 00:15:59.371 END TEST nvmf_connect_disconnect 00:15:59.371 ************************************ 00:15:59.630 00:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:59.630 00:58:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:59.630 00:58:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:59.630 00:58:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:59.630 ************************************ 00:15:59.630 START TEST nvmf_multitarget 00:15:59.630 ************************************ 00:15:59.630 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:59.630 * Looking for 
test storage... 00:15:59.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:59.630 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:59.630 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:15:59.630 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:59.630 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:59.630 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:59.630 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:59.630 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:59.630 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:59.630 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:59.630 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:59.630 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:15:59.631 00:58:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@293 -- # pci_drivers=() 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:01.534 00:58:31 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:01.534 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.534 
00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:01.534 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.534 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:01.535 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:01.535 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes 
]] 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:01.535 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:01.794 00:58:31 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:01.794 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:01.794 00:58:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:01.794 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:01.794 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:01.794 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:01.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:01.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:16:01.794 00:16:01.794 --- 10.0.0.2 ping statistics --- 00:16:01.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.794 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:16:01.794 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:01.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:01.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:16:01.794 00:16:01.794 --- 10.0.0.1 ping statistics --- 00:16:01.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.794 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:16:01.794 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:01.794 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:16:01.794 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:01.794 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:01.794 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:01.794 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:01.794 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:01.794 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:01.794 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:01.794 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:01.794 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:01.794 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:01.794 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:01.794 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1801888 00:16:01.794 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # 
waitforlisten 1801888 00:16:01.794 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 1801888 ']' 00:16:01.794 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:01.794 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.794 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:01.794 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.794 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:01.794 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:01.794 [2024-07-26 00:58:32.104235] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:16:01.794 [2024-07-26 00:58:32.104325] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.794 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.794 [2024-07-26 00:58:32.175753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:02.053 [2024-07-26 00:58:32.272385] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:02.053 [2024-07-26 00:58:32.272447] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:02.053 [2024-07-26 00:58:32.272463] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:02.053 [2024-07-26 00:58:32.272477] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:02.053 [2024-07-26 00:58:32.272488] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:02.053 [2024-07-26 00:58:32.272544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:02.053 [2024-07-26 00:58:32.272579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:02.053 [2024-07-26 00:58:32.272623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:02.053 [2024-07-26 00:58:32.272625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.053 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:02.053 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:16:02.053 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:02.053 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:02.053 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:02.053 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:02.053 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:02.053 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:02.053 00:58:32 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:02.311 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:02.311 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:02.311 "nvmf_tgt_1" 00:16:02.311 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:02.311 "nvmf_tgt_2" 00:16:02.568 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:02.568 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:02.568 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:02.568 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:02.568 true 00:16:02.568 00:58:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:02.826 true 00:16:02.826 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:02.826 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:02.826 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:02.826 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:02.826 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:02.826 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:02.826 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:16:02.826 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:02.826 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:16:02.826 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:02.826 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:02.826 rmmod nvme_tcp 00:16:02.826 rmmod nvme_fabrics 00:16:02.826 rmmod nvme_keyring 00:16:02.826 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:02.826 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:16:02.826 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:16:02.826 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1801888 ']' 00:16:02.826 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1801888 00:16:02.826 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 1801888 ']' 00:16:02.826 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 1801888 00:16:02.826 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:16:02.826 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:02.826 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1801888 00:16:03.085 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:03.085 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:03.085 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1801888' 00:16:03.085 killing process with pid 1801888 00:16:03.085 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 1801888 00:16:03.085 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 1801888 00:16:03.085 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:03.085 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:03.085 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:03.085 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:03.085 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:03.085 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.085 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:03.085 00:58:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:05.616 00:16:05.616 real 
0m5.687s 00:16:05.616 user 0m6.310s 00:16:05.616 sys 0m1.869s 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:05.616 ************************************ 00:16:05.616 END TEST nvmf_multitarget 00:16:05.616 ************************************ 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:05.616 ************************************ 00:16:05.616 START TEST nvmf_rpc 00:16:05.616 ************************************ 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:05.616 * Looking for test storage... 
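The nvmf_multitarget run that just finished drives its pass/fail decisions with a `jq length`-style count check (`'[' 1 '!=' 1 ']'`, `'[' 3 '!=' 3 ']'` in the trace above). A stand-alone sketch of that pattern, with a stub JSON list in place of `multitarget_rpc.py nvmf_get_targets` (which needs a live `nvmf_tgt` process) and `python3` standing in for `jq` so the sketch has no extra dependencies:

```shell
#!/usr/bin/env bash
# Sketch of the count-check pattern from test/nvmf/target/multitarget.sh.
# "targets" is a stub for `multitarget_rpc.py nvmf_get_targets` output;
# the comparison mirrors the script's `[ "$(... | jq length)" != N ]` shape.
set -e

targets='["nvmf_tgt_0", "nvmf_tgt_1", "nvmf_tgt_2"]'

# Count the elements of the JSON array, as `jq length` does in the trace.
count=$(python3 -c 'import json,sys; print(len(json.load(sys.stdin)))' <<< "$targets")

if [ "$count" != 3 ]; then
    echo "unexpected target count: $count" >&2
    exit 1
fi
echo "target count OK: $count"
```

The real script runs the same check three times: once before creating `nvmf_tgt_1`/`nvmf_tgt_2` (expects 1, the default target), once after (expects 3), and once after deleting both (expects 1 again).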
00:16:05.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:05.616 
00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:05.616 00:58:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:05.616 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:05.617 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:16:05.617 00:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:07.519 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:07.519 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:07.519 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:07.519 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:07.519 00:58:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:07.519 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:07.520 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:07.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:07.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:16:07.520 00:16:07.520 --- 10.0.0.2 ping statistics --- 00:16:07.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.520 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:16:07.520 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:07.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:07.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:16:07.520 00:16:07.520 --- 10.0.0.1 ping statistics --- 00:16:07.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.520 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:16:07.520 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:07.520 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:16:07.520 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:07.520 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:07.520 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:07.520 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:07.520 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:07.520 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:07.520 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:07.520 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:07.520 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:07.520 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:16:07.520 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:07.520 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1803981 00:16:07.520 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:07.520 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1803981 00:16:07.520 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 1803981 ']' 00:16:07.520 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.520 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:07.520 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.520 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:07.520 00:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:07.520 [2024-07-26 00:58:37.814103] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:16:07.520 [2024-07-26 00:58:37.814194] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.520 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.520 [2024-07-26 00:58:37.890314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:07.778 [2024-07-26 00:58:37.985852] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:07.778 [2024-07-26 00:58:37.985908] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:07.778 [2024-07-26 00:58:37.985934] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:07.778 [2024-07-26 00:58:37.985958] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:07.778 [2024-07-26 00:58:37.985978] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:07.778 [2024-07-26 00:58:37.986093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.778 [2024-07-26 00:58:37.986133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:07.778 [2024-07-26 00:58:37.986552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:07.778 [2024-07-26 00:58:37.986559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.778 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:07.778 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:16:07.778 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:07.778 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:07.778 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:07.778 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:07.778 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:07.778 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.778 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:07.778 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.778 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:07.778 "tick_rate": 2700000000, 00:16:07.778 "poll_groups": [ 00:16:07.778 { 00:16:07.778 "name": "nvmf_tgt_poll_group_000", 00:16:07.778 "admin_qpairs": 0, 00:16:07.778 "io_qpairs": 0, 00:16:07.778 "current_admin_qpairs": 0, 00:16:07.778 "current_io_qpairs": 0, 00:16:07.778 "pending_bdev_io": 0, 00:16:07.778 "completed_nvme_io": 0, 
00:16:07.778 "transports": [] 00:16:07.778 }, 00:16:07.778 { 00:16:07.778 "name": "nvmf_tgt_poll_group_001", 00:16:07.778 "admin_qpairs": 0, 00:16:07.778 "io_qpairs": 0, 00:16:07.778 "current_admin_qpairs": 0, 00:16:07.778 "current_io_qpairs": 0, 00:16:07.778 "pending_bdev_io": 0, 00:16:07.778 "completed_nvme_io": 0, 00:16:07.778 "transports": [] 00:16:07.778 }, 00:16:07.778 { 00:16:07.778 "name": "nvmf_tgt_poll_group_002", 00:16:07.778 "admin_qpairs": 0, 00:16:07.778 "io_qpairs": 0, 00:16:07.778 "current_admin_qpairs": 0, 00:16:07.778 "current_io_qpairs": 0, 00:16:07.778 "pending_bdev_io": 0, 00:16:07.778 "completed_nvme_io": 0, 00:16:07.778 "transports": [] 00:16:07.778 }, 00:16:07.778 { 00:16:07.778 "name": "nvmf_tgt_poll_group_003", 00:16:07.778 "admin_qpairs": 0, 00:16:07.778 "io_qpairs": 0, 00:16:07.778 "current_admin_qpairs": 0, 00:16:07.778 "current_io_qpairs": 0, 00:16:07.778 "pending_bdev_io": 0, 00:16:07.778 "completed_nvme_io": 0, 00:16:07.778 "transports": [] 00:16:07.778 } 00:16:07.778 ] 00:16:07.778 }' 00:16:07.778 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:07.778 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:07.778 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:07.778 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:07.778 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:07.778 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:08.036 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:08.036 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:08.036 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:08.036 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.036 [2024-07-26 00:58:38.226010] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:08.036 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.036 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:08.036 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.036 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.036 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.036 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:08.036 "tick_rate": 2700000000, 00:16:08.036 "poll_groups": [ 00:16:08.036 { 00:16:08.036 "name": "nvmf_tgt_poll_group_000", 00:16:08.036 "admin_qpairs": 0, 00:16:08.036 "io_qpairs": 0, 00:16:08.036 "current_admin_qpairs": 0, 00:16:08.036 "current_io_qpairs": 0, 00:16:08.036 "pending_bdev_io": 0, 00:16:08.036 "completed_nvme_io": 0, 00:16:08.036 "transports": [ 00:16:08.036 { 00:16:08.036 "trtype": "TCP" 00:16:08.036 } 00:16:08.036 ] 00:16:08.036 }, 00:16:08.036 { 00:16:08.036 "name": "nvmf_tgt_poll_group_001", 00:16:08.036 "admin_qpairs": 0, 00:16:08.036 "io_qpairs": 0, 00:16:08.036 "current_admin_qpairs": 0, 00:16:08.036 "current_io_qpairs": 0, 00:16:08.036 "pending_bdev_io": 0, 00:16:08.036 "completed_nvme_io": 0, 00:16:08.036 "transports": [ 00:16:08.036 { 00:16:08.036 "trtype": "TCP" 00:16:08.036 } 00:16:08.036 ] 00:16:08.036 }, 00:16:08.036 { 00:16:08.036 "name": "nvmf_tgt_poll_group_002", 00:16:08.036 "admin_qpairs": 0, 00:16:08.036 "io_qpairs": 0, 00:16:08.036 "current_admin_qpairs": 0, 00:16:08.036 "current_io_qpairs": 0, 00:16:08.036 "pending_bdev_io": 0, 00:16:08.036 "completed_nvme_io": 0, 00:16:08.036 
"transports": [ 00:16:08.036 { 00:16:08.036 "trtype": "TCP" 00:16:08.036 } 00:16:08.036 ] 00:16:08.036 }, 00:16:08.036 { 00:16:08.036 "name": "nvmf_tgt_poll_group_003", 00:16:08.036 "admin_qpairs": 0, 00:16:08.037 "io_qpairs": 0, 00:16:08.037 "current_admin_qpairs": 0, 00:16:08.037 "current_io_qpairs": 0, 00:16:08.037 "pending_bdev_io": 0, 00:16:08.037 "completed_nvme_io": 0, 00:16:08.037 "transports": [ 00:16:08.037 { 00:16:08.037 "trtype": "TCP" 00:16:08.037 } 00:16:08.037 ] 00:16:08.037 } 00:16:08.037 ] 00:16:08.037 }' 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:08.037 00:58:38 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.037 Malloc1 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.037 [2024-07-26 00:58:38.379502] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:08.037 [2024-07-26 00:58:38.401956] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:08.037 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:08.037 could not add new controller: failed to write to nvme-fabrics device 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:08.037 00:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:08.969 00:58:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:08.969 00:58:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:08.969 00:58:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:08.969 00:58:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:08.969 00:58:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:10.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- 
# local i=0 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:10.865 00:58:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:10.865 [2024-07-26 00:58:41.212326] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:10.865 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:10.865 could not add new controller: failed to write to nvme-fabrics device 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd 
nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.865 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:11.459 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:11.459 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:11.459 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:11.459 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:11.459 00:58:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:13.996 00:58:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:13.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.996 00:58:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.996 [2024-07-26 00:58:43.979628] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.996 00:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n 
nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:14.255 00:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:14.255 00:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:14.255 00:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:14.255 00:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:14.255 00:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:16.796 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:16.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # 
grep -q -w SPDKISFASTANDAWESOME 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.797 [2024-07-26 00:58:46.824682] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.797 00:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:17.057 00:58:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:17.057 00:58:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 
00:16:17.057 00:58:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:17.057 00:58:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:17.057 00:58:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:19.600 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:19.600 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:19.600 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:19.600 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:19.600 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:19.600 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:19.600 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:19.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.600 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:19.600 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:19.600 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:19.600 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:19.600 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:19.600 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:16:19.600 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:19.600 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:19.600 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.600 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.601 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.601 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:19.601 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.601 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.601 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.601 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:19.601 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:19.601 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.601 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.601 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.601 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:19.601 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.601 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:19.601 [2024-07-26 00:58:49.566664] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:19.601 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.601 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:19.601 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.601 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.601 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.601 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:19.601 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.601 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.601 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.601 00:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:19.860 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:19.860 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:19.860 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:19.860 00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:19.860 
00:58:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:21.763 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:21.763 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:21.763 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:21.763 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:21.763 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:21.763 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:21.763 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:22.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.022 [2024-07-26 00:58:52.321630] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.022 00:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:22.588 00:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:22.588 00:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:22.588 00:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:22.588 00:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:22.588 00:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:25.129 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:25.129 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:25.129 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:25.129 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:25.129 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:25.129 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:25.129 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:25.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.130 [2024-07-26 00:58:55.202714] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.130 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:25.697 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:25.697 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:25.697 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:25.697 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:25.697 00:58:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:27.604 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:27.604 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:27.604 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:27.604 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:27.604 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:27.604 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:27.604 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:27.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.604 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:27.604 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:27.604 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:27.604 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:27.604 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:27.604 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:27.604 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:27.605 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:27.605 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.605 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.605 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.605 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.605 00:58:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.605 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.605 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.605 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:27.605 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:27.605 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:27.605 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.605 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.605 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.605 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:27.605 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.605 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.605 [2024-07-26 00:58:57.986464] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.605 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.605 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:27.605 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.605 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.605 
00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.605 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:27.605 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.605 00:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.605 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.605 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:27.605 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.605 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.605 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.605 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.605 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.605 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.605 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.605 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:27.605 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:27.605 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.605 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.605 
00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.605 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:27.605 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.605 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.864 [2024-07-26 00:58:58.034528] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.864 [2024-07-26 00:58:58.082683] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 
00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.864 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 
00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.865 [2024-07-26 00:58:58.130854] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.865 [2024-07-26 00:58:58.179012] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.865 00:58:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.865 00:58:58 
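The trace above repeats one subsystem lifecycle per loop iteration of `target/rpc.sh` (lines @99-@107 in the trace). A minimal dry-run sketch of that loop, with the command names and arguments taken from the trace itself; the real `rpc_cmd` wrapper invokes SPDK's `scripts/rpc.py` against a live target, so it is stubbed here with `echo` purely so the loop can run standalone:

```shell
#!/bin/sh
# Dry-run sketch of the per-iteration lifecycle exercised by target/rpc.sh.
# rpc_cmd is stubbed with echo; in the real test it calls scripts/rpc.py.
rpc_cmd() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
loops=3   # the real test uses its own $loops value

for i in $(seq 1 "$loops"); do
  # create the subsystem with the serial the test greps for via waitforserial
  rpc_cmd nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_ns "$NQN" Malloc1
  rpc_cmd nvmf_subsystem_allow_any_host "$NQN"
  # teardown: drop the namespace, then the subsystem
  rpc_cmd nvmf_subsystem_remove_ns "$NQN" 1
  rpc_cmd nvmf_delete_subsystem "$NQN"
done
```

The earlier iterations in the trace (rpc.sh @81-@94) additionally `nvme connect` to `10.0.0.2:4420` and wait for the `SPDKISFASTANDAWESOME` serial to appear in `lsblk` before disconnecting; that host-side step needs real hardware and is omitted from this sketch.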
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:27.865 "tick_rate": 2700000000, 00:16:27.865 "poll_groups": [ 00:16:27.865 { 00:16:27.865 "name": "nvmf_tgt_poll_group_000", 00:16:27.865 "admin_qpairs": 2, 00:16:27.865 "io_qpairs": 84, 00:16:27.865 "current_admin_qpairs": 0, 00:16:27.865 "current_io_qpairs": 0, 00:16:27.865 "pending_bdev_io": 0, 00:16:27.865 "completed_nvme_io": 133, 00:16:27.865 "transports": [ 00:16:27.865 { 00:16:27.865 "trtype": "TCP" 00:16:27.865 } 00:16:27.865 ] 00:16:27.865 }, 00:16:27.865 { 00:16:27.865 "name": "nvmf_tgt_poll_group_001", 00:16:27.865 "admin_qpairs": 2, 00:16:27.865 "io_qpairs": 84, 00:16:27.865 "current_admin_qpairs": 0, 00:16:27.865 "current_io_qpairs": 0, 00:16:27.865 "pending_bdev_io": 0, 00:16:27.865 "completed_nvme_io": 124, 00:16:27.865 "transports": [ 00:16:27.865 { 00:16:27.865 "trtype": "TCP" 00:16:27.865 } 00:16:27.865 ] 00:16:27.865 }, 00:16:27.865 { 00:16:27.865 "name": "nvmf_tgt_poll_group_002", 00:16:27.865 "admin_qpairs": 1, 00:16:27.865 "io_qpairs": 84, 00:16:27.865 "current_admin_qpairs": 0, 00:16:27.865 "current_io_qpairs": 0, 00:16:27.865 "pending_bdev_io": 0, 00:16:27.865 "completed_nvme_io": 232, 00:16:27.865 "transports": [ 00:16:27.865 { 00:16:27.865 "trtype": "TCP" 00:16:27.865 } 00:16:27.865 ] 00:16:27.865 }, 00:16:27.865 { 00:16:27.865 "name": "nvmf_tgt_poll_group_003", 00:16:27.865 "admin_qpairs": 2, 00:16:27.865 "io_qpairs": 84, 00:16:27.865 "current_admin_qpairs": 0, 00:16:27.865 "current_io_qpairs": 0, 00:16:27.865 "pending_bdev_io": 0, 
00:16:27.865 "completed_nvme_io": 197, 00:16:27.865 "transports": [ 00:16:27.865 { 00:16:27.865 "trtype": "TCP" 00:16:27.865 } 00:16:27.865 ] 00:16:27.865 } 00:16:27.865 ] 00:16:27.865 }' 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:27.865 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:28.123 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:16:28.123 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:28.123 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:28.123 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:28.123 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:28.123 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:16:28.123 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:28.123 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- 
# set +e 00:16:28.123 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:28.123 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:28.123 rmmod nvme_tcp 00:16:28.123 rmmod nvme_fabrics 00:16:28.123 rmmod nvme_keyring 00:16:28.123 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:28.123 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:16:28.123 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:16:28.123 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1803981 ']' 00:16:28.123 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1803981 00:16:28.123 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 1803981 ']' 00:16:28.123 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 1803981 00:16:28.123 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:16:28.123 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:28.123 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1803981 00:16:28.123 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:28.123 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:28.123 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1803981' 00:16:28.123 killing process with pid 1803981 00:16:28.123 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 1803981 00:16:28.123 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@974 -- # wait 1803981 00:16:28.382 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:28.382 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:28.382 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:28.382 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:28.382 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:28.382 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.382 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:28.382 00:58:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.293 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:30.293 00:16:30.293 real 0m25.084s 00:16:30.293 user 1m21.637s 00:16:30.293 sys 0m4.088s 00:16:30.293 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:30.293 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.293 ************************************ 00:16:30.293 END TEST nvmf_rpc 00:16:30.293 ************************************ 00:16:30.293 00:59:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:30.293 00:59:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:30.293 00:59:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:30.293 00:59:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # 
set +x 00:16:30.555 ************************************ 00:16:30.555 START TEST nvmf_invalid 00:16:30.555 ************************************ 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:30.555 * Looking for test storage... 00:16:30.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:30.555 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.556 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.556 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:30.556 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:30.556 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:30.556 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:30.556 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:30.556 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:30.556 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:30.556 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:30.556 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:30.556 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:30.556 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:30.556 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:30.556 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:30.556 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:30.556 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.556 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:30.556 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.556 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:30.556 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:30.556 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:16:30.556 00:59:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:16:32.459 00:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:32.459 
00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:32.459 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:32.459 00:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:32.459 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:32.459 
00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:32.459 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:32.459 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 
00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:32.459 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:32.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:32.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:16:32.460 00:16:32.460 --- 10.0.0.2 ping statistics --- 00:16:32.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:32.460 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:32.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:32.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:16:32.460 00:16:32.460 --- 10.0.0.1 ping statistics --- 00:16:32.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:32.460 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1808470 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1808470 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 1808470 ']' 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:32.460 00:59:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:32.718 [2024-07-26 00:59:02.929650] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:16:32.718 [2024-07-26 00:59:02.929722] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:32.718 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.718 [2024-07-26 00:59:02.993650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:32.718 [2024-07-26 00:59:03.081825] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:32.718 [2024-07-26 00:59:03.081897] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:32.718 [2024-07-26 00:59:03.081918] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:32.718 [2024-07-26 00:59:03.081934] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:32.718 [2024-07-26 00:59:03.081949] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:32.718 [2024-07-26 00:59:03.082152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.718 [2024-07-26 00:59:03.082199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:32.718 [2024-07-26 00:59:03.082226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:32.718 [2024-07-26 00:59:03.082232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.977 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:32.977 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:16:32.977 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:32.977 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:32.977 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:32.977 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:32.977 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:32.977 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10007 00:16:33.235 [2024-07-26 00:59:03.446835] 
nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:33.235 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:16:33.235 { 00:16:33.235 "nqn": "nqn.2016-06.io.spdk:cnode10007", 00:16:33.235 "tgt_name": "foobar", 00:16:33.235 "method": "nvmf_create_subsystem", 00:16:33.235 "req_id": 1 00:16:33.235 } 00:16:33.235 Got JSON-RPC error response 00:16:33.235 response: 00:16:33.235 { 00:16:33.235 "code": -32603, 00:16:33.235 "message": "Unable to find target foobar" 00:16:33.235 }' 00:16:33.235 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:16:33.235 { 00:16:33.235 "nqn": "nqn.2016-06.io.spdk:cnode10007", 00:16:33.235 "tgt_name": "foobar", 00:16:33.235 "method": "nvmf_create_subsystem", 00:16:33.235 "req_id": 1 00:16:33.235 } 00:16:33.235 Got JSON-RPC error response 00:16:33.235 response: 00:16:33.235 { 00:16:33.235 "code": -32603, 00:16:33.235 "message": "Unable to find target foobar" 00:16:33.235 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:33.235 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:33.235 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode14276 00:16:33.493 [2024-07-26 00:59:03.691651] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14276: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:33.493 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:16:33.493 { 00:16:33.493 "nqn": "nqn.2016-06.io.spdk:cnode14276", 00:16:33.493 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:33.493 "method": "nvmf_create_subsystem", 00:16:33.493 "req_id": 1 00:16:33.493 } 00:16:33.493 Got JSON-RPC error response 00:16:33.493 response: 
00:16:33.493 { 00:16:33.493 "code": -32602, 00:16:33.493 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:33.493 }' 00:16:33.493 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:16:33.493 { 00:16:33.493 "nqn": "nqn.2016-06.io.spdk:cnode14276", 00:16:33.493 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:33.493 "method": "nvmf_create_subsystem", 00:16:33.493 "req_id": 1 00:16:33.493 } 00:16:33.493 Got JSON-RPC error response 00:16:33.493 response: 00:16:33.493 { 00:16:33.493 "code": -32602, 00:16:33.493 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:33.493 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:33.493 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:33.493 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode20637 00:16:33.752 [2024-07-26 00:59:03.940470] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20637: invalid model number 'SPDK_Controller' 00:16:33.752 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:16:33.752 { 00:16:33.752 "nqn": "nqn.2016-06.io.spdk:cnode20637", 00:16:33.752 "model_number": "SPDK_Controller\u001f", 00:16:33.752 "method": "nvmf_create_subsystem", 00:16:33.752 "req_id": 1 00:16:33.752 } 00:16:33.752 Got JSON-RPC error response 00:16:33.752 response: 00:16:33.752 { 00:16:33.752 "code": -32602, 00:16:33.752 "message": "Invalid MN SPDK_Controller\u001f" 00:16:33.752 }' 00:16:33.752 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:16:33.752 { 00:16:33.752 "nqn": "nqn.2016-06.io.spdk:cnode20637", 00:16:33.752 "model_number": "SPDK_Controller\u001f", 00:16:33.752 "method": "nvmf_create_subsystem", 00:16:33.752 "req_id": 1 00:16:33.752 } 
00:16:33.752 Got JSON-RPC error response 00:16:33.752 response: 00:16:33.752 { 00:16:33.752 "code": -32602, 00:16:33.752 "message": "Invalid MN SPDK_Controller\u001f" 00:16:33.752 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:33.752 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:33.752 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:16:33.752 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:33.752 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:33.752 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:33.752 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:33.752 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:33.752 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:33.752 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:33.752 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:33.752 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:33.753 00:59:03 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:33.753 00:59:03 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:16:33.753 00:59:03 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:33.753 00:59:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:16:33.753 00:59:04 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:16:33.753 00:59:04 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:33.753 00:59:04 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ a == \- ]] 00:16:33.753 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'aS*FZ%i%t!xaWF*M*c3x_' 00:16:33.754 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'aS*FZ%i%t!xaWF*M*c3x_' nqn.2016-06.io.spdk:cnode22359 00:16:34.022 [2024-07-26 00:59:04.285659] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22359: invalid serial number 'aS*FZ%i%t!xaWF*M*c3x_' 00:16:34.022 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:16:34.022 { 00:16:34.022 "nqn": "nqn.2016-06.io.spdk:cnode22359", 00:16:34.022 "serial_number": "aS*FZ%i%t!xaWF*M*c3x_", 00:16:34.022 "method": "nvmf_create_subsystem", 00:16:34.022 "req_id": 1 00:16:34.022 } 00:16:34.022 Got JSON-RPC error response 00:16:34.022 response: 00:16:34.022 { 00:16:34.022 "code": -32602, 00:16:34.022 "message": "Invalid SN aS*FZ%i%t!xaWF*M*c3x_" 00:16:34.022 }' 00:16:34.022 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:16:34.022 { 00:16:34.022 "nqn": "nqn.2016-06.io.spdk:cnode22359", 00:16:34.022 "serial_number": "aS*FZ%i%t!xaWF*M*c3x_", 00:16:34.022 "method": "nvmf_create_subsystem", 00:16:34.022 "req_id": 1 00:16:34.022 } 00:16:34.022 Got JSON-RPC error response 
00:16:34.022 response: 00:16:34.022 { 00:16:34.022 "code": -32602, 00:16:34.022 "message": "Invalid SN aS*FZ%i%t!xaWF*M*c3x_" 00:16:34.022 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:34.022 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:16:34.022 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:16:34.022 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:34.022 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:34.022 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:34.022 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:34.022 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.022 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:16:34.022 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:16:34.022 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:16:34.022 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.022 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@25 -- # printf %x 66 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=\' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x48' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 96 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # string+='>' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x6f' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:16:34.023 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.024 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.024 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:16:34.024 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:16:34.024 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:16:34.024 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.024 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.024 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 120 00:16:34.024 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:16:34.024 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:16:34.024 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.024 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.024 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:16:34.024 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:16:34.024 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:16:34.024 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:34.024 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:34.024 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ t == \- ]] 00:16:34.024 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'tBaeS>Z#\]sG'\''|URH.uf`"4A4.Dae{6>iV5oNv;x+' 00:16:34.024 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'tBaeS>Z#\]sG'\''|URH.uf`"4A4.Dae{6>iV5oNv;x+' nqn.2016-06.io.spdk:cnode13032 00:16:34.317 [2024-07-26 00:59:04.658911] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13032: invalid model number 'tBaeS>Z#\]sG'|URH.uf`"4A4.Dae{6>iV5oNv;x+' 00:16:34.317 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:16:34.317 { 00:16:34.317 "nqn": "nqn.2016-06.io.spdk:cnode13032", 00:16:34.317 "model_number": "tBaeS>Z#\\]sG'\''|URH.uf`\"4A4.Dae{6>iV5oNv;x+", 00:16:34.317 "method": "nvmf_create_subsystem", 
00:16:34.317 "req_id": 1 00:16:34.317 } 00:16:34.317 Got JSON-RPC error response 00:16:34.317 response: 00:16:34.317 { 00:16:34.317 "code": -32602, 00:16:34.317 "message": "Invalid MN tBaeS>Z#\\]sG'\''|URH.uf`\"4A4.Dae{6>iV5oNv;x+" 00:16:34.317 }' 00:16:34.317 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:16:34.317 { 00:16:34.317 "nqn": "nqn.2016-06.io.spdk:cnode13032", 00:16:34.317 "model_number": "tBaeS>Z#\\]sG'|URH.uf`\"4A4.Dae{6>iV5oNv;x+", 00:16:34.317 "method": "nvmf_create_subsystem", 00:16:34.317 "req_id": 1 00:16:34.317 } 00:16:34.317 Got JSON-RPC error response 00:16:34.317 response: 00:16:34.317 { 00:16:34.317 "code": -32602, 00:16:34.317 "message": "Invalid MN tBaeS>Z#\\]sG'|URH.uf`\"4A4.Dae{6>iV5oNv;x+" 00:16:34.317 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:34.317 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:16:34.574 [2024-07-26 00:59:04.899792] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:34.574 00:59:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:16:34.832 00:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:16:34.833 00:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:16:34.833 00:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:16:34.833 00:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:16:34.833 00:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:16:35.091 [2024-07-26 
00:59:05.421537] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:16:35.091 00:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:16:35.091 { 00:16:35.091 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:35.091 "listen_address": { 00:16:35.091 "trtype": "tcp", 00:16:35.091 "traddr": "", 00:16:35.091 "trsvcid": "4421" 00:16:35.091 }, 00:16:35.091 "method": "nvmf_subsystem_remove_listener", 00:16:35.091 "req_id": 1 00:16:35.091 } 00:16:35.091 Got JSON-RPC error response 00:16:35.091 response: 00:16:35.091 { 00:16:35.091 "code": -32602, 00:16:35.091 "message": "Invalid parameters" 00:16:35.091 }' 00:16:35.091 00:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:16:35.091 { 00:16:35.091 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:35.091 "listen_address": { 00:16:35.091 "trtype": "tcp", 00:16:35.091 "traddr": "", 00:16:35.091 "trsvcid": "4421" 00:16:35.091 }, 00:16:35.091 "method": "nvmf_subsystem_remove_listener", 00:16:35.091 "req_id": 1 00:16:35.091 } 00:16:35.091 Got JSON-RPC error response 00:16:35.091 response: 00:16:35.091 { 00:16:35.091 "code": -32602, 00:16:35.091 "message": "Invalid parameters" 00:16:35.091 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:35.091 00:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16841 -i 0 00:16:35.349 [2024-07-26 00:59:05.670294] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16841: invalid cntlid range [0-65519] 00:16:35.349 00:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:16:35.349 { 00:16:35.349 "nqn": "nqn.2016-06.io.spdk:cnode16841", 00:16:35.349 "min_cntlid": 0, 00:16:35.349 "method": "nvmf_create_subsystem", 00:16:35.349 "req_id": 1 00:16:35.349 } 00:16:35.349 Got JSON-RPC 
error response 00:16:35.349 response: 00:16:35.349 { 00:16:35.349 "code": -32602, 00:16:35.349 "message": "Invalid cntlid range [0-65519]" 00:16:35.349 }' 00:16:35.349 00:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:16:35.349 { 00:16:35.349 "nqn": "nqn.2016-06.io.spdk:cnode16841", 00:16:35.349 "min_cntlid": 0, 00:16:35.349 "method": "nvmf_create_subsystem", 00:16:35.349 "req_id": 1 00:16:35.349 } 00:16:35.349 Got JSON-RPC error response 00:16:35.349 response: 00:16:35.349 { 00:16:35.349 "code": -32602, 00:16:35.349 "message": "Invalid cntlid range [0-65519]" 00:16:35.349 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:35.349 00:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31408 -i 65520 00:16:35.607 [2024-07-26 00:59:05.923117] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31408: invalid cntlid range [65520-65519] 00:16:35.607 00:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:16:35.607 { 00:16:35.607 "nqn": "nqn.2016-06.io.spdk:cnode31408", 00:16:35.607 "min_cntlid": 65520, 00:16:35.607 "method": "nvmf_create_subsystem", 00:16:35.607 "req_id": 1 00:16:35.607 } 00:16:35.607 Got JSON-RPC error response 00:16:35.607 response: 00:16:35.607 { 00:16:35.607 "code": -32602, 00:16:35.607 "message": "Invalid cntlid range [65520-65519]" 00:16:35.607 }' 00:16:35.607 00:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:16:35.607 { 00:16:35.607 "nqn": "nqn.2016-06.io.spdk:cnode31408", 00:16:35.607 "min_cntlid": 65520, 00:16:35.607 "method": "nvmf_create_subsystem", 00:16:35.607 "req_id": 1 00:16:35.607 } 00:16:35.607 Got JSON-RPC error response 00:16:35.607 response: 00:16:35.607 { 00:16:35.607 "code": -32602, 00:16:35.607 "message": "Invalid cntlid range 
[65520-65519]" 00:16:35.607 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:35.607 00:59:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8228 -I 0 00:16:35.865 [2024-07-26 00:59:06.167935] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8228: invalid cntlid range [1-0] 00:16:35.865 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:16:35.865 { 00:16:35.865 "nqn": "nqn.2016-06.io.spdk:cnode8228", 00:16:35.865 "max_cntlid": 0, 00:16:35.865 "method": "nvmf_create_subsystem", 00:16:35.865 "req_id": 1 00:16:35.865 } 00:16:35.865 Got JSON-RPC error response 00:16:35.865 response: 00:16:35.865 { 00:16:35.865 "code": -32602, 00:16:35.865 "message": "Invalid cntlid range [1-0]" 00:16:35.865 }' 00:16:35.865 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:16:35.865 { 00:16:35.865 "nqn": "nqn.2016-06.io.spdk:cnode8228", 00:16:35.865 "max_cntlid": 0, 00:16:35.865 "method": "nvmf_create_subsystem", 00:16:35.865 "req_id": 1 00:16:35.865 } 00:16:35.865 Got JSON-RPC error response 00:16:35.865 response: 00:16:35.865 { 00:16:35.865 "code": -32602, 00:16:35.865 "message": "Invalid cntlid range [1-0]" 00:16:35.865 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:35.865 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31410 -I 65520 00:16:36.123 [2024-07-26 00:59:06.424806] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31410: invalid cntlid range [1-65520] 00:16:36.123 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:16:36.123 { 00:16:36.123 "nqn": "nqn.2016-06.io.spdk:cnode31410", 
00:16:36.123 "max_cntlid": 65520, 00:16:36.123 "method": "nvmf_create_subsystem", 00:16:36.123 "req_id": 1 00:16:36.123 } 00:16:36.123 Got JSON-RPC error response 00:16:36.123 response: 00:16:36.123 { 00:16:36.123 "code": -32602, 00:16:36.123 "message": "Invalid cntlid range [1-65520]" 00:16:36.123 }' 00:16:36.123 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:16:36.123 { 00:16:36.123 "nqn": "nqn.2016-06.io.spdk:cnode31410", 00:16:36.123 "max_cntlid": 65520, 00:16:36.123 "method": "nvmf_create_subsystem", 00:16:36.123 "req_id": 1 00:16:36.123 } 00:16:36.123 Got JSON-RPC error response 00:16:36.123 response: 00:16:36.123 { 00:16:36.123 "code": -32602, 00:16:36.123 "message": "Invalid cntlid range [1-65520]" 00:16:36.123 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:36.123 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11256 -i 6 -I 5 00:16:36.382 [2024-07-26 00:59:06.673673] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11256: invalid cntlid range [6-5] 00:16:36.382 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:16:36.382 { 00:16:36.382 "nqn": "nqn.2016-06.io.spdk:cnode11256", 00:16:36.382 "min_cntlid": 6, 00:16:36.382 "max_cntlid": 5, 00:16:36.382 "method": "nvmf_create_subsystem", 00:16:36.382 "req_id": 1 00:16:36.382 } 00:16:36.382 Got JSON-RPC error response 00:16:36.382 response: 00:16:36.382 { 00:16:36.382 "code": -32602, 00:16:36.382 "message": "Invalid cntlid range [6-5]" 00:16:36.382 }' 00:16:36.382 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:16:36.382 { 00:16:36.382 "nqn": "nqn.2016-06.io.spdk:cnode11256", 00:16:36.382 "min_cntlid": 6, 00:16:36.382 "max_cntlid": 5, 00:16:36.382 "method": "nvmf_create_subsystem", 00:16:36.382 
"req_id": 1 00:16:36.382 } 00:16:36.382 Got JSON-RPC error response 00:16:36.382 response: 00:16:36.382 { 00:16:36.382 "code": -32602, 00:16:36.382 "message": "Invalid cntlid range [6-5]" 00:16:36.382 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:36.382 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:16:36.382 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:16:36.382 { 00:16:36.382 "name": "foobar", 00:16:36.382 "method": "nvmf_delete_target", 00:16:36.382 "req_id": 1 00:16:36.382 } 00:16:36.382 Got JSON-RPC error response 00:16:36.382 response: 00:16:36.382 { 00:16:36.382 "code": -32602, 00:16:36.382 "message": "The specified target doesn'\''t exist, cannot delete it." 00:16:36.382 }' 00:16:36.382 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:16:36.382 { 00:16:36.382 "name": "foobar", 00:16:36.382 "method": "nvmf_delete_target", 00:16:36.382 "req_id": 1 00:16:36.382 } 00:16:36.382 Got JSON-RPC error response 00:16:36.382 response: 00:16:36.382 { 00:16:36.382 "code": -32602, 00:16:36.382 "message": "The specified target doesn't exist, cannot delete it." 
00:16:36.382 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:16:36.382 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:16:36.382 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:16:36.382 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:36.382 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:16:36.642 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:36.642 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:16:36.642 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:36.642 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:36.642 rmmod nvme_tcp 00:16:36.642 rmmod nvme_fabrics 00:16:36.642 rmmod nvme_keyring 00:16:36.642 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:36.642 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:16:36.642 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:16:36.642 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1808470 ']' 00:16:36.642 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1808470 00:16:36.642 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 1808470 ']' 00:16:36.642 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 1808470 00:16:36.642 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:16:36.642 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:36.642 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1808470 00:16:36.642 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:36.642 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:36.642 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1808470' 00:16:36.642 killing process with pid 1808470 00:16:36.642 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 1808470 00:16:36.642 00:59:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 1808470 00:16:36.901 00:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:36.901 00:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:36.901 00:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:36.901 00:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:36.901 00:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:36.901 00:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.901 00:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:36.901 00:59:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.807 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:38.807 00:16:38.807 real 0m8.441s 00:16:38.807 user 0m19.642s 00:16:38.807 sys 0m2.341s 
00:16:38.807 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:38.807 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:38.807 ************************************ 00:16:38.807 END TEST nvmf_invalid 00:16:38.807 ************************************ 00:16:38.807 00:59:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:38.807 00:59:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:38.807 00:59:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:38.807 00:59:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:38.807 ************************************ 00:16:38.807 START TEST nvmf_connect_stress 00:16:38.807 ************************************ 00:16:38.807 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:39.067 * Looking for test storage... 
00:16:39.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:39.067 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:39.067 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:16:39.067 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.067 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.067 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.067 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.067 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.067 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.067 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.067 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.067 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.067 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.067 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.067 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.067 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.067 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.067 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:39.067 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:39.067 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:39.068 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.068 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.068 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.068 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.068 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.068 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.068 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:39.068 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.068 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:16:39.068 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:39.068 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:39.068 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:39.068 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.068 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.068 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:39.068 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:39.068 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:39.068 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:39.068 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:39.068 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:16:39.068 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:39.068 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:39.068 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:39.068 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.068 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:39.068 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.068 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:39.068 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:39.068 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:16:39.068 00:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:40.973 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.973 00:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:40.973 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:40.973 00:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:40.973 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:40.973 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:40.973 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:16:40.973 
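The device-discovery phase above buckets each PCI function by its vendor:device ID into the `e810`, `x722`, or `mlx` arrays before picking net devices. A minimal shell sketch of that bucketing, with the IDs copied from the `nvmf/common.sh` lines in this log (the `classify_nic` helper is our illustration, not a function in the harness):

```shell
# Sketch of the vendor:device -> NIC-family bucketing done by
# nvmf/common.sh (its e810/x722/mlx arrays); IDs taken from the log above.
classify_nic() {   # usage: classify_nic <vendor_id> <device_id>
    case "$1:$2" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;
        0x8086:0x37d2)               echo x722 ;;
        0x15b3:0xa2dc|0x15b3:0x1021|0x15b3:0xa2d6|0x15b3:0x101d|0x15b3:0x1017|0x15b3:0x1019|0x15b3:0x1015|0x15b3:0x1013) echo mlx ;;
        *)                           echo unknown ;;
    esac
}

# The two ports found in this run, 0000:0a:00.0/.1 "(0x8086 - 0x159b)":
classify_nic 0x8086 0x159b   # -> e810
```

In the run above both discovered ports classify as `e810` (driver `ice`), which is why `pci_devs` is reset to the `e810` array and two `cvl_*` net devices are collected.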
00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:40.974 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:40.974 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:40.974 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:40.974 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:40.974 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:40.974 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:40.974 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:40.974 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:40.974 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:40.974 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:40.974 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:40.974 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:40.974 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:40.974 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:40.974 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:40.974 
00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:40.974 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:40.974 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:40.974 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:40.974 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:40.974 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:40.974 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:40.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:40.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:16:40.974 00:16:40.974 --- 10.0.0.2 ping statistics --- 00:16:40.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.974 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:16:40.974 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:41.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:41.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:16:41.238 00:16:41.238 --- 10.0.0.1 ping statistics --- 00:16:41.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.238 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:16:41.238 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:41.238 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:16:41.238 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:41.238 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:41.238 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:41.238 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:41.238 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:41.238 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:41.238 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:41.238 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:41.238 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:41.238 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:41.238 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:41.238 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1811097 00:16:41.238 00:59:11 
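The `nvmf_tcp_init` sequence traced above moves the target port into a private network namespace and verifies connectivity in both directions before the target app starts. The same plumbing can be sketched as the script below; the `run`/`DRY_RUN` wrapper is our addition (the real commands need root and the physical `cvl_0_0`/`cvl_0_1` ports), the commands themselves mirror the log:

```shell
# Sketch of the netns topology built by nvmf/common.sh:nvmf_tcp_init.
# DRY_RUN=1 (default here) prints each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"          # target port into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side stays in root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                       # root ns -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1   # target ns -> root ns
```

After this, every target-side command in the log (`nvmf_tgt`, the second `ping`) is prefixed with `ip netns exec cvl_0_0_ns_spdk`, which is what the `NVMF_TARGET_NS_CMD` array holds.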
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:41.238 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1811097 00:16:41.238 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 1811097 ']' 00:16:41.238 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.238 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:41.238 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.238 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:41.238 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:41.238 [2024-07-26 00:59:11.479236] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:16:41.238 [2024-07-26 00:59:11.479312] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:41.238 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.238 [2024-07-26 00:59:11.549029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:41.238 [2024-07-26 00:59:11.643166] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:41.238 [2024-07-26 00:59:11.643223] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:41.238 [2024-07-26 00:59:11.643240] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:41.238 [2024-07-26 00:59:11.643254] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:41.238 [2024-07-26 00:59:11.643266] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:41.238 [2024-07-26 00:59:11.643354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:41.238 [2024-07-26 00:59:11.643411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:41.238 [2024-07-26 00:59:11.643414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:16:41.499 [2024-07-26 00:59:11.798664] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:41.499 [2024-07-26 00:59:11.823227] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:41.499 NULL1 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1811126 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:41.499 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:41.499 00:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:41.499 00:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.499 00:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:42.066 00:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.067 00:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126 00:16:42.067 00:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:42.067 00:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.067 00:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:42.325 00:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.325 00:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126 00:16:42.325 00:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:42.325 00:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.325 00:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:42.585 00:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.585 00:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126 00:16:42.585 00:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:42.585 00:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.585 00:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:42.845 
00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.845 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126 00:16:42.845 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:42.845 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.845 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.105 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.105 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126 00:16:43.105 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:43.105 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.105 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.671 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.671 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126 00:16:43.671 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:43.671 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.671 00:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.929 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.929 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126 
00:16:43.929 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:43.929 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.929 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.188 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.188 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126 00:16:44.188 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:44.188 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.188 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.447 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.447 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126 00:16:44.447 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:44.447 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.447 00:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.705 00:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.705 00:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126 00:16:44.705 00:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:44.705 00:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # 
xtrace_disable
00:16:44.705 00:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:45.271 00:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:45.272 00:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126
00:16:45.272 00:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:45.272 00:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:45.272 00:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:45.531 00:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:45.531 00:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126
00:16:45.531 00:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:45.531 00:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:45.531 00:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:45.790 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:45.790 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126
00:16:45.790 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:45.790 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:45.790 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:46.048 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.048 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126
00:16:46.048 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:46.048 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.048 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:46.306 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.306 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126
00:16:46.306 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:46.306 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.306 00:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:46.873 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.873 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126
00:16:46.873 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:46.873 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.873 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:47.132 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:47.132 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126
00:16:47.132 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:47.132 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:47.132 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:47.391 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:47.391 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126
00:16:47.391 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:47.392 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:47.392 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:47.649 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:47.649 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126
00:16:47.649 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:47.649 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:47.649 00:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:47.906 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:47.906 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126
00:16:47.906 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:47.906 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:47.906 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:48.474 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:48.474 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126
00:16:48.474 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:48.474 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:48.474 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:48.733 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:48.733 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126
00:16:48.733 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:48.733 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:48.733 00:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:48.993 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:48.993 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126
00:16:48.993 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:48.993 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:48.993 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:49.251 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:49.251 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126
00:16:49.251 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:49.251 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:49.251 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:49.509 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:49.509 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126
00:16:49.509 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:49.509 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:49.509 00:59:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:50.079 00:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:50.079 00:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126
00:16:50.079 00:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:50.079 00:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:50.079 00:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:50.338 00:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:50.338 00:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126
00:16:50.339 00:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:50.339 00:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:50.339 00:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:50.596 00:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:50.596 00:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126
00:16:50.596 00:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:50.596 00:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:50.596 00:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:50.854 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:50.854 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126
00:16:50.854 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:50.854 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:50.854 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:51.114 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:51.114 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126
00:16:51.114 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:51.114 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:51.114 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:51.713
00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.713 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126 00:16:51.713 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.713 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.713 00:59:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.713 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:51.973 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.973 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811126 00:16:51.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1811126) - No such process 00:16:51.973 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1811126 00:16:51.973 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:51.973 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:51.973 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:51.973 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:51.973 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:16:51.973 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:51.973 00:59:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:16:51.973 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:51.973 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:51.973 rmmod nvme_tcp 00:16:51.973 rmmod nvme_fabrics 00:16:51.973 rmmod nvme_keyring 00:16:51.973 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:51.973 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:16:51.973 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:16:51.973 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1811097 ']' 00:16:51.973 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1811097 00:16:51.973 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 1811097 ']' 00:16:51.973 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 1811097 00:16:51.973 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:16:51.973 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:51.973 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1811097 00:16:51.973 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:51.973 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:51.973 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
1811097' 00:16:51.973 killing process with pid 1811097 00:16:51.973 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 1811097 00:16:51.973 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 1811097 00:16:52.232 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:52.232 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:52.232 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:52.232 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:52.232 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:52.232 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:52.232 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:52.232 00:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.134 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:54.134 00:16:54.134 real 0m15.323s 00:16:54.134 user 0m38.328s 00:16:54.134 sys 0m5.970s 00:16:54.134 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:54.134 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:54.134 ************************************ 00:16:54.134 END TEST nvmf_connect_stress 00:16:54.134 ************************************ 00:16:54.134 00:59:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:54.134 00:59:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:54.134 00:59:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:54.134 00:59:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:54.392 ************************************ 00:16:54.392 START TEST nvmf_fused_ordering 00:16:54.392 ************************************ 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:54.392 * Looking for test storage... 00:16:54.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.392 00:59:24 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:54.392 00:59:24 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.392 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:54.393 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.393 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:54.393 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:54.393 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:16:54.393 00:59:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # 
local -a pci_net_devs 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:56.294 00:59:26 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:56.294 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:56.294 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]]
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:16:56.294 Found net devices under 0000:0a:00.0: cvl_0_0
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]]
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:16:56.294 Found net devices under 0000:0a:00.1: cvl_0_1
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:16:56.294 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:16:56.295 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:16:56.295 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:16:56.295 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:16:56.295 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:16:56.295 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:16:56.295 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:16:56.295 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:16:56.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:56.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms
00:16:56.295
00:16:56.295 --- 10.0.0.2 ping statistics ---
00:16:56.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:56.295 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms
00:16:56.295 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:56.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
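Editor's note: the nvmf/common.sh@248-@268 trace above builds a back-to-back TCP test bed on one host: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed 10.0.0.2/24, the initiator keeps cvl_0_1 at 10.0.0.1/24, port 4420 is opened in the firewall, and both directions are verified with ping. A minimal sketch that only prints this plan (the real commands need root and the cvl_* hardware ports; `print_netns_plan` is a hypothetical helper, while the namespace, interface, and address values come from the log):

```shell
#!/usr/bin/env bash
# Print (not execute) the namespace/addressing plan used by the test bed above.
print_netns_plan() {
  local ns=$1 target_if=$2 initiator_if=$3 target_ip=$4 initiator_ip=$5
  cat <<EOF
ip netns add $ns
ip link set $target_if netns $ns
ip addr add $initiator_ip/24 dev $initiator_if
ip netns exec $ns ip addr add $target_ip/24 dev $target_if
ip link set $initiator_if up
ip netns exec $ns ip link set $target_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT
EOF
}

print_netns_plan cvl_0_0_ns_spdk cvl_0_0 cvl_0_1 10.0.0.2 10.0.0.1
```

Moving one port of a dual-port NIC into a namespace lets a single machine act as both target and initiator over a real wire, which is why the log later prefixes the target app with `ip netns exec`.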
00:16:56.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms
00:16:56.295
00:16:56.295 --- 10.0.0.1 ping statistics ---
00:16:56.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:56.295 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms
00:16:56.295 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:56.295 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0
00:16:56.295 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:16:56.295 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:56.295 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:16:56.295 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:16:56.295 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:56.295 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:16:56.295 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:16:56.295 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:16:56.295 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:16:56.295 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable
00:16:56.295 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:56.295 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1814274
00:16:56.295 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:16:56.295 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1814274
00:16:56.295 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 1814274 ']'
00:16:56.295 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:56.295 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100
00:16:56.295 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:56.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:56.295 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable
00:16:56.295 00:59:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:56.555 [2024-07-26 00:59:26.754976] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization...
00:16:56.555 [2024-07-26 00:59:26.755068] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:56.555 EAL: No free 2048 kB hugepages reported on node 1
00:16:56.555 [2024-07-26 00:59:26.827544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:56.555 [2024-07-26 00:59:26.926096] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
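Editor's note: `waitforlisten` above polls until the freshly started target's RPC socket (`rpc_addr=/var/tmp/spdk.sock`, `max_retries=100`) is available before any RPC is issued. A minimal sketch of such a readiness poll, assuming only that readiness is signalled by a UNIX socket appearing at a known path; `wait_for_rpc_socket` is an illustrative name, not SPDK's actual implementation:

```shell
#!/usr/bin/env bash
# Poll for a UNIX socket to appear, up to max_retries attempts, 0.1s apart.
# Returns 0 once the socket exists, 1 on timeout.
wait_for_rpc_socket() {
  local rpc_addr=$1 max_retries=${2:-100}
  local i
  for ((i = 0; i < max_retries; i++)); do
    [[ -S $rpc_addr ]] && return 0
    sleep 0.1
  done
  return 1
}
```

The real helper additionally checks that the PID it was given is still alive while waiting, so a crashed target fails fast instead of burning all retries.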
00:16:56.555 [2024-07-26 00:59:26.926170] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:56.555 [2024-07-26 00:59:26.926187] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:56.555 [2024-07-26 00:59:26.926201] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:56.555 [2024-07-26 00:59:26.926213] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:56.555 [2024-07-26 00:59:26.926243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:16:56.814 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:16:56.814 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0
00:16:56.814 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:16:56.814 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable
00:16:56.814 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:56.814 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:56.814 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:16:56.814 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:56.814 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:56.814 [2024-07-26 00:59:27.073975] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:56.814 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:56.814 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:16:56.814 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:56.814 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:56.814 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:56.814 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:56.814 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:56.814 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:56.814 [2024-07-26 00:59:27.090220] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:56.814 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:56.814 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:16:56.814 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:56.814 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:56.814 NULL1
00:16:56.814 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:56.814 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:16:56.814 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:56.814 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:56.814 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:56.814 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:16:56.814 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:56.814 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:56.814 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:56.814 00:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:16:56.815 [2024-07-26 00:59:27.134961] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization...
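Editor's note: the `rpc_cmd` calls in fused_ordering.sh@15-@20 above provision the target in order: create the TCP transport, create subsystem cnode1, add a listener on 10.0.0.2:4420, create a 1000 MiB null bdev with 512-byte blocks, wait for bdev examine, then attach the bdev as a namespace. A sketch that only prints the equivalent invocations (the `scripts/rpc.py` path and the wrapper function are illustrative; the RPC method names and arguments come from the log):

```shell
#!/usr/bin/env bash
# Print the RPC provisioning sequence used by the fused_ordering test above.
provision_fused_ordering_target() {
  local rpc="scripts/rpc.py"   # hypothetical path to SPDK's rpc.py client
  echo "$rpc nvmf_create_transport -t tcp -o -u 8192"
  echo "$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10"
  echo "$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
  echo "$rpc bdev_null_create NULL1 1000 512"
  echo "$rpc bdev_wait_for_examine"
  echo "$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1"
}

provision_fused_ordering_target
```

The ordering matters: the transport must exist before a listener can be added, and the bdev must exist (and be examined) before it can be exposed as namespace 1, which is the 1 GB namespace the initiator reports attaching to below.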
00:16:56.815 [2024-07-26 00:59:27.135004] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1814403 ]
00:16:56.815 EAL: No free 2048 kB hugepages reported on node 1
00:16:57.383 Attached to nqn.2016-06.io.spdk:cnode1
00:16:57.383 Namespace ID: 1 size: 1GB
00:16:57.383 fused_ordering(0)
00:16:57.383 fused_ordering(1)
[fused_ordering(2) through fused_ordering(775): one sequential counter line per fused command, identical in form to the two above, timestamps advancing from 00:16:57.383 to 00:16:58.780]
00:16:58.780 fused_ordering(776) 00:16:58.780 fused_ordering(777) 00:16:58.780 fused_ordering(778) 00:16:58.780 fused_ordering(779) 00:16:58.780 fused_ordering(780) 00:16:58.780 fused_ordering(781) 00:16:58.780 fused_ordering(782) 00:16:58.780 fused_ordering(783) 00:16:58.780 fused_ordering(784) 00:16:58.780 fused_ordering(785) 00:16:58.780 fused_ordering(786) 00:16:58.780 fused_ordering(787) 00:16:58.780 fused_ordering(788) 00:16:58.780 fused_ordering(789) 00:16:58.780 fused_ordering(790) 00:16:58.780 fused_ordering(791) 00:16:58.780 fused_ordering(792) 00:16:58.780 fused_ordering(793) 00:16:58.780 fused_ordering(794) 00:16:58.781 fused_ordering(795) 00:16:58.781 fused_ordering(796) 00:16:58.781 fused_ordering(797) 00:16:58.781 fused_ordering(798) 00:16:58.781 fused_ordering(799) 00:16:58.781 fused_ordering(800) 00:16:58.781 fused_ordering(801) 00:16:58.781 fused_ordering(802) 00:16:58.781 fused_ordering(803) 00:16:58.781 fused_ordering(804) 00:16:58.781 fused_ordering(805) 00:16:58.781 fused_ordering(806) 00:16:58.781 fused_ordering(807) 00:16:58.781 fused_ordering(808) 00:16:58.781 fused_ordering(809) 00:16:58.781 fused_ordering(810) 00:16:58.781 fused_ordering(811) 00:16:58.781 fused_ordering(812) 00:16:58.781 fused_ordering(813) 00:16:58.781 fused_ordering(814) 00:16:58.781 fused_ordering(815) 00:16:58.781 fused_ordering(816) 00:16:58.781 fused_ordering(817) 00:16:58.781 fused_ordering(818) 00:16:58.781 fused_ordering(819) 00:16:58.781 fused_ordering(820) 00:16:59.350 fused_ordering(821) 00:16:59.350 fused_ordering(822) 00:16:59.350 fused_ordering(823) 00:16:59.350 fused_ordering(824) 00:16:59.350 fused_ordering(825) 00:16:59.350 fused_ordering(826) 00:16:59.350 fused_ordering(827) 00:16:59.350 fused_ordering(828) 00:16:59.350 fused_ordering(829) 00:16:59.350 fused_ordering(830) 00:16:59.350 fused_ordering(831) 00:16:59.350 fused_ordering(832) 00:16:59.350 fused_ordering(833) 00:16:59.350 fused_ordering(834) 00:16:59.350 fused_ordering(835) 00:16:59.350 
fused_ordering(836) 00:16:59.350 fused_ordering(837) 00:16:59.350 fused_ordering(838) 00:16:59.350 fused_ordering(839) 00:16:59.350 fused_ordering(840) 00:16:59.350 fused_ordering(841) 00:16:59.350 fused_ordering(842) 00:16:59.350 fused_ordering(843) 00:16:59.350 fused_ordering(844) 00:16:59.350 fused_ordering(845) 00:16:59.350 fused_ordering(846) 00:16:59.350 fused_ordering(847) 00:16:59.350 fused_ordering(848) 00:16:59.350 fused_ordering(849) 00:16:59.350 fused_ordering(850) 00:16:59.350 fused_ordering(851) 00:16:59.350 fused_ordering(852) 00:16:59.350 fused_ordering(853) 00:16:59.350 fused_ordering(854) 00:16:59.350 fused_ordering(855) 00:16:59.350 fused_ordering(856) 00:16:59.350 fused_ordering(857) 00:16:59.350 fused_ordering(858) 00:16:59.350 fused_ordering(859) 00:16:59.350 fused_ordering(860) 00:16:59.350 fused_ordering(861) 00:16:59.350 fused_ordering(862) 00:16:59.350 fused_ordering(863) 00:16:59.350 fused_ordering(864) 00:16:59.350 fused_ordering(865) 00:16:59.350 fused_ordering(866) 00:16:59.350 fused_ordering(867) 00:16:59.350 fused_ordering(868) 00:16:59.350 fused_ordering(869) 00:16:59.350 fused_ordering(870) 00:16:59.350 fused_ordering(871) 00:16:59.350 fused_ordering(872) 00:16:59.350 fused_ordering(873) 00:16:59.350 fused_ordering(874) 00:16:59.350 fused_ordering(875) 00:16:59.350 fused_ordering(876) 00:16:59.350 fused_ordering(877) 00:16:59.350 fused_ordering(878) 00:16:59.350 fused_ordering(879) 00:16:59.350 fused_ordering(880) 00:16:59.350 fused_ordering(881) 00:16:59.350 fused_ordering(882) 00:16:59.350 fused_ordering(883) 00:16:59.350 fused_ordering(884) 00:16:59.350 fused_ordering(885) 00:16:59.350 fused_ordering(886) 00:16:59.350 fused_ordering(887) 00:16:59.350 fused_ordering(888) 00:16:59.350 fused_ordering(889) 00:16:59.350 fused_ordering(890) 00:16:59.350 fused_ordering(891) 00:16:59.350 fused_ordering(892) 00:16:59.350 fused_ordering(893) 00:16:59.350 fused_ordering(894) 00:16:59.350 fused_ordering(895) 00:16:59.350 fused_ordering(896) 
00:16:59.350 fused_ordering(897) 00:16:59.350 fused_ordering(898) 00:16:59.350 fused_ordering(899) 00:16:59.350 fused_ordering(900) 00:16:59.350 fused_ordering(901) 00:16:59.350 fused_ordering(902) 00:16:59.350 fused_ordering(903) 00:16:59.350 fused_ordering(904) 00:16:59.350 fused_ordering(905) 00:16:59.350 fused_ordering(906) 00:16:59.350 fused_ordering(907) 00:16:59.350 fused_ordering(908) 00:16:59.350 fused_ordering(909) 00:16:59.350 fused_ordering(910) 00:16:59.350 fused_ordering(911) 00:16:59.350 fused_ordering(912) 00:16:59.350 fused_ordering(913) 00:16:59.350 fused_ordering(914) 00:16:59.350 fused_ordering(915) 00:16:59.350 fused_ordering(916) 00:16:59.350 fused_ordering(917) 00:16:59.350 fused_ordering(918) 00:16:59.350 fused_ordering(919) 00:16:59.350 fused_ordering(920) 00:16:59.350 fused_ordering(921) 00:16:59.350 fused_ordering(922) 00:16:59.350 fused_ordering(923) 00:16:59.350 fused_ordering(924) 00:16:59.350 fused_ordering(925) 00:16:59.350 fused_ordering(926) 00:16:59.350 fused_ordering(927) 00:16:59.350 fused_ordering(928) 00:16:59.350 fused_ordering(929) 00:16:59.350 fused_ordering(930) 00:16:59.350 fused_ordering(931) 00:16:59.350 fused_ordering(932) 00:16:59.350 fused_ordering(933) 00:16:59.350 fused_ordering(934) 00:16:59.350 fused_ordering(935) 00:16:59.350 fused_ordering(936) 00:16:59.350 fused_ordering(937) 00:16:59.350 fused_ordering(938) 00:16:59.350 fused_ordering(939) 00:16:59.350 fused_ordering(940) 00:16:59.350 fused_ordering(941) 00:16:59.350 fused_ordering(942) 00:16:59.350 fused_ordering(943) 00:16:59.350 fused_ordering(944) 00:16:59.350 fused_ordering(945) 00:16:59.350 fused_ordering(946) 00:16:59.350 fused_ordering(947) 00:16:59.350 fused_ordering(948) 00:16:59.350 fused_ordering(949) 00:16:59.350 fused_ordering(950) 00:16:59.350 fused_ordering(951) 00:16:59.350 fused_ordering(952) 00:16:59.350 fused_ordering(953) 00:16:59.350 fused_ordering(954) 00:16:59.350 fused_ordering(955) 00:16:59.350 fused_ordering(956) 00:16:59.350 
fused_ordering(957) 00:16:59.350 fused_ordering(958) 00:16:59.350 fused_ordering(959) 00:16:59.350 fused_ordering(960) 00:16:59.350 fused_ordering(961) 00:16:59.350 fused_ordering(962) 00:16:59.350 fused_ordering(963) 00:16:59.350 fused_ordering(964) 00:16:59.350 fused_ordering(965) 00:16:59.350 fused_ordering(966) 00:16:59.350 fused_ordering(967) 00:16:59.350 fused_ordering(968) 00:16:59.350 fused_ordering(969) 00:16:59.350 fused_ordering(970) 00:16:59.350 fused_ordering(971) 00:16:59.350 fused_ordering(972) 00:16:59.350 fused_ordering(973) 00:16:59.350 fused_ordering(974) 00:16:59.350 fused_ordering(975) 00:16:59.350 fused_ordering(976) 00:16:59.350 fused_ordering(977) 00:16:59.350 fused_ordering(978) 00:16:59.350 fused_ordering(979) 00:16:59.350 fused_ordering(980) 00:16:59.350 fused_ordering(981) 00:16:59.350 fused_ordering(982) 00:16:59.350 fused_ordering(983) 00:16:59.350 fused_ordering(984) 00:16:59.350 fused_ordering(985) 00:16:59.350 fused_ordering(986) 00:16:59.350 fused_ordering(987) 00:16:59.350 fused_ordering(988) 00:16:59.350 fused_ordering(989) 00:16:59.350 fused_ordering(990) 00:16:59.350 fused_ordering(991) 00:16:59.350 fused_ordering(992) 00:16:59.350 fused_ordering(993) 00:16:59.350 fused_ordering(994) 00:16:59.350 fused_ordering(995) 00:16:59.350 fused_ordering(996) 00:16:59.350 fused_ordering(997) 00:16:59.350 fused_ordering(998) 00:16:59.350 fused_ordering(999) 00:16:59.350 fused_ordering(1000) 00:16:59.350 fused_ordering(1001) 00:16:59.350 fused_ordering(1002) 00:16:59.350 fused_ordering(1003) 00:16:59.350 fused_ordering(1004) 00:16:59.350 fused_ordering(1005) 00:16:59.350 fused_ordering(1006) 00:16:59.350 fused_ordering(1007) 00:16:59.350 fused_ordering(1008) 00:16:59.350 fused_ordering(1009) 00:16:59.350 fused_ordering(1010) 00:16:59.350 fused_ordering(1011) 00:16:59.350 fused_ordering(1012) 00:16:59.350 fused_ordering(1013) 00:16:59.350 fused_ordering(1014) 00:16:59.350 fused_ordering(1015) 00:16:59.350 fused_ordering(1016) 00:16:59.350 
fused_ordering(1017) 00:16:59.350 fused_ordering(1018) 00:16:59.350 fused_ordering(1019) 00:16:59.350 fused_ordering(1020) 00:16:59.350 fused_ordering(1021) 00:16:59.350 fused_ordering(1022) 00:16:59.350 fused_ordering(1023) 00:16:59.350 00:59:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:59.350 00:59:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:59.350 00:59:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:59.350 00:59:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:16:59.350 00:59:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:59.350 00:59:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:16:59.350 00:59:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:59.350 00:59:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:59.350 rmmod nvme_tcp 00:16:59.610 rmmod nvme_fabrics 00:16:59.610 rmmod nvme_keyring 00:16:59.610 00:59:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:59.610 00:59:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:16:59.610 00:59:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:16:59.610 00:59:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1814274 ']' 00:16:59.610 00:59:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1814274 00:16:59.610 00:59:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 1814274 ']' 00:16:59.610 00:59:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@954 -- # kill -0 1814274 00:16:59.610 00:59:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:16:59.610 00:59:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:59.610 00:59:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1814274 00:16:59.610 00:59:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:59.610 00:59:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:59.610 00:59:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1814274' 00:16:59.610 killing process with pid 1814274 00:16:59.610 00:59:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 1814274 00:16:59.611 00:59:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 1814274 00:16:59.871 00:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:59.871 00:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:59.871 00:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:59.871 00:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:59.871 00:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:59.871 00:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.871 00:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:16:59.871 00:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.780 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:01.780 00:17:01.780 real 0m7.550s 00:17:01.780 user 0m4.717s 00:17:01.780 sys 0m3.540s 00:17:01.780 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:01.780 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:01.780 ************************************ 00:17:01.780 END TEST nvmf_fused_ordering 00:17:01.780 ************************************ 00:17:01.780 00:59:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:01.780 00:59:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:01.780 00:59:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:01.780 00:59:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:01.780 ************************************ 00:17:01.780 START TEST nvmf_ns_masking 00:17:01.780 ************************************ 00:17:01.780 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:02.039 * Looking for test storage... 
00:17:02.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:02.039 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:02.039 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:17:02.039 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.039 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.039 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.039 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.039 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.039 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.039 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.039 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.039 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.039 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.039 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:02.039 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:02.039 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.039 
00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.039 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:02.039 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:02.039 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:02.039 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.039 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.039 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # 
loops=5 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=e8db1ea8-bb33-4a6f-9776-1ab49ee872de 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=1e471b74-49eb-45c2-8e1a-7fb4a39c8d53 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=2bb01170-e2a5-4738-909e-5f361f6bf535 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.040 00:59:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:17:02.040 00:59:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # 
x722=() 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:03.944 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:03.945 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:03.945 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:03.945 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:03.945 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:03.945 00:59:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:03.945 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:04.204 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:04.204 00:59:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:04.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:04.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:17:04.204 00:17:04.204 --- 10.0.0.2 ping statistics --- 00:17:04.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.204 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:17:04.204 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:04.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:04.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:17:04.204 00:17:04.204 --- 10.0.0.1 ping statistics --- 00:17:04.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.204 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:17:04.204 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:04.204 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:17:04.204 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:04.204 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:04.204 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:04.204 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:04.204 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:04.204 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:04.204 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:04.204 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@51 -- # nvmfappstart 00:17:04.204 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:04.205 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:04.205 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:04.205 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1816606 00:17:04.205 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:04.205 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1816606 00:17:04.205 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1816606 ']' 00:17:04.205 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.205 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:04.205 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.205 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:04.205 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:04.205 [2024-07-26 00:59:34.467121] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:17:04.205 [2024-07-26 00:59:34.467222] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.205 EAL: No free 2048 kB hugepages reported on node 1 00:17:04.205 [2024-07-26 00:59:34.542486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.463 [2024-07-26 00:59:34.638826] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:04.463 [2024-07-26 00:59:34.638880] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:04.463 [2024-07-26 00:59:34.638905] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:04.463 [2024-07-26 00:59:34.638926] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:04.463 [2024-07-26 00:59:34.638945] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:04.463 [2024-07-26 00:59:34.638981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.463 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:04.463 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:17:04.463 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:04.463 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:04.463 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:04.463 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:04.463 00:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:04.720 [2024-07-26 00:59:35.051544] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.720 00:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:04.720 00:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:04.720 00:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:04.978 Malloc1 00:17:04.978 00:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:05.544 Malloc2 00:17:05.544 00:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:05.544 00:59:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:05.801 00:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:06.061 [2024-07-26 00:59:36.393865] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:06.061 00:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:06.061 00:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2bb01170-e2a5-4738-909e-5f361f6bf535 -a 10.0.0.2 -s 4420 -i 4 00:17:06.320 00:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:06.320 00:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:06.320 00:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:06.320 00:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:06.320 00:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:08.224 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:08.224 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:08.225 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # 
grep -c SPDKISFASTANDAWESOME 00:17:08.225 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:08.225 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:08.225 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:08.225 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:08.225 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:08.484 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:08.484 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:08.484 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:08.484 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:08.484 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:08.484 [ 0]:0x1 00:17:08.484 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:08.484 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:08.484 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7214853d8ce5417c91ef5633daac441e 00:17:08.484 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7214853d8ce5417c91ef5633daac441e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:08.484 00:59:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:08.742 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:08.742 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:08.742 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:08.742 [ 0]:0x1 00:17:08.742 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:08.742 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:08.742 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7214853d8ce5417c91ef5633daac441e 00:17:08.742 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7214853d8ce5417c91ef5633daac441e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:08.742 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:08.742 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:08.742 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:08.742 [ 1]:0x2 00:17:08.742 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:08.742 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:08.742 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=19739ce9dc274dcfb530874e65ddc146 00:17:08.742 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 19739ce9dc274dcfb530874e65ddc146 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:08.742 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:08.742 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:09.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:09.000 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:09.258 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:09.517 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:09.517 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2bb01170-e2a5-4738-909e-5f361f6bf535 -a 10.0.0.2 -s 4420 -i 4 00:17:09.517 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:09.517 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:09.517 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:09.517 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:17:09.517 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:17:09.517 00:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:12.105 00:59:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:12.105 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:12.105 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:12.105 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:12.105 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:12.105 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:12.105 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:12.105 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:12.105 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:12.105 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:12.105 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:12.105 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:12.105 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:12.105 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:12.105 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:12.105 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
type -t ns_is_visible 00:17:12.105 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:12.105 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:12.105 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:12.105 00:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:12.105 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:12.105 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:12.105 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:12.105 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:12.105 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:12.105 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:12.105 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:12.105 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:12.105 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:17:12.105 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:12.105 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:12.105 [ 0]:0x2 00:17:12.105 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:12.105 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:12.105 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=19739ce9dc274dcfb530874e65ddc146 00:17:12.105 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 19739ce9dc274dcfb530874e65ddc146 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:12.105 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:12.105 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:12.105 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:12.105 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:12.105 [ 0]:0x1 00:17:12.105 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:12.105 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:12.105 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7214853d8ce5417c91ef5633daac441e 00:17:12.105 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7214853d8ce5417c91ef5633daac441e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:12.105 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:12.105 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:12.105 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # grep 0x2 00:17:12.105 [ 1]:0x2 00:17:12.105 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:12.105 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:12.364 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=19739ce9dc274dcfb530874e65ddc146 00:17:12.364 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 19739ce9dc274dcfb530874e65ddc146 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:12.364 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:12.622 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:12.622 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:12.622 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:12.622 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:12.622 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:12.622 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:12.622 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:12.622 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:12.622 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:17:12.622 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:12.622 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:12.622 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:12.622 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:12.622 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:12.622 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:12.622 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:12.622 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:12.622 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:12.622 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:12.622 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:12.622 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:12.622 [ 0]:0x2 00:17:12.622 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:12.622 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:12.622 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=19739ce9dc274dcfb530874e65ddc146 00:17:12.622 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- 
# [[ 19739ce9dc274dcfb530874e65ddc146 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:12.622 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:12.622 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:12.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:12.622 00:59:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:12.880 00:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:12.880 00:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2bb01170-e2a5-4738-909e-5f361f6bf535 -a 10.0.0.2 -s 4420 -i 4 00:17:13.137 00:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:13.137 00:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:13.137 00:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:13.137 00:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:13.137 00:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:13.137 00:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:15.030 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:15.030 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l 
-o NAME,SERIAL 00:17:15.030 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:15.030 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:15.030 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:15.030 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:15.030 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:15.030 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:15.287 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:15.287 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:15.287 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:15.287 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:15.287 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:15.287 [ 0]:0x1 00:17:15.287 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:15.287 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:15.287 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7214853d8ce5417c91ef5633daac441e 00:17:15.287 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7214853d8ce5417c91ef5633daac441e != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:15.287 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:15.287 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:15.288 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:15.288 [ 1]:0x2 00:17:15.288 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:15.288 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:15.288 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=19739ce9dc274dcfb530874e65ddc146 00:17:15.288 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 19739ce9dc274dcfb530874e65ddc146 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:15.288 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:15.545 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:15.545 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:15.545 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:15.545 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:15.545 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:15.545 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:17:15.545 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:15.545 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:15.545 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:15.545 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:15.545 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:15.545 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:15.803 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:15.803 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:15.803 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:15.803 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:15.803 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:15.803 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:15.803 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:15.803 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:15.803 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:15.803 [ 0]:0x2 00:17:15.803 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:15.803 00:59:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:15.803 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=19739ce9dc274dcfb530874e65ddc146 00:17:15.803 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 19739ce9dc274dcfb530874e65ddc146 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:15.803 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:15.803 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:15.803 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:15.803 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:15.803 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:15.803 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:15.803 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:15.803 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:15.803 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
case "$(type -t "$arg")" in 00:17:15.803 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:15.803 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:15.803 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:16.060 [2024-07-26 00:59:46.276011] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:16.060 request: 00:17:16.060 { 00:17:16.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:16.060 "nsid": 2, 00:17:16.060 "host": "nqn.2016-06.io.spdk:host1", 00:17:16.060 "method": "nvmf_ns_remove_host", 00:17:16.060 "req_id": 1 00:17:16.060 } 00:17:16.060 Got JSON-RPC error response 00:17:16.060 response: 00:17:16.060 { 00:17:16.060 "code": -32602, 00:17:16.060 "message": "Invalid parameters" 00:17:16.060 } 00:17:16.060 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:16.060 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:16.060 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:16.060 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:16.060 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:16.060 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:16.060 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # 
valid_exec_arg ns_is_visible 0x1 00:17:16.060 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:16.060 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.060 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:16.060 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.060 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:16.060 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:16.060 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:16.060 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:16.060 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:16.060 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:16.060 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:16.060 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:16.060 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:16.061 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:16.061 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:16.061 00:59:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:16.061 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:16.061 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:16.061 [ 0]:0x2 00:17:16.061 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:16.061 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:16.061 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=19739ce9dc274dcfb530874e65ddc146 00:17:16.061 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 19739ce9dc274dcfb530874e65ddc146 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:16.061 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:16.061 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:16.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:16.318 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1818228 00:17:16.318 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:16.318 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.318 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1818228 /var/tmp/host.sock 00:17:16.318 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1818228 ']' 00:17:16.318 
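The `ns_is_visible` checks traced above keep comparing the `jq -r .nguid` output against 32 zeros: a namespace masked away from the connecting host still shows up with an all-zero NGUID, while a visible one reports its real NGUID. The core of that check can be sketched as below (an assumption-laden reconstruction of the `target/ns_masking.sh` lines 43-45 visible in this trace, not the script itself; `ns_nguid_visible` is a hypothetical name, and the real helper also runs `nvme list-ns` and `nvme id-ns` against the device first):

```shell
# Sketch of the visibility predicate implied by the trace above.
# A masked namespace reports an NGUID of 32 zeros; anything else is visible.
ns_nguid_visible() {
  local nguid=$1
  [[ $nguid != "00000000000000000000000000000000" ]]
}

# NGUIDs taken verbatim from the log:
ns_nguid_visible 19739ce9dc274dcfb530874e65ddc146 && echo "visible"
ns_nguid_visible 00000000000000000000000000000000 || echo "masked"
```

In the trace, the masked case is exercised under the `NOT` wrapper, which is why the all-zero comparison is followed by `es=1` bookkeeping rather than a test failure.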
00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:17:16.318 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:16.319 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:16.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:16.319 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:16.319 00:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:16.319 [2024-07-26 00:59:46.607572] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:17:16.319 [2024-07-26 00:59:46.607648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1818228 ] 00:17:16.319 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.319 [2024-07-26 00:59:46.670257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.576 [2024-07-26 00:59:46.763924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.833 00:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:16.833 00:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:17:16.834 00:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:17.091 00:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:17.091 00:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid e8db1ea8-bb33-4a6f-9776-1ab49ee872de 00:17:17.091 00:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:17:17.348 00:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E8DB1EA8BB334A6F97761AB49EE872DE -i 00:17:17.348 00:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 1e471b74-49eb-45c2-8e1a-7fb4a39c8d53 00:17:17.348 00:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:17:17.348 00:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 1E471B7449EB45C28E1A7FB4A39C8D53 -i 00:17:17.913 00:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:18.169 00:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:18.427 00:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:18.427 00:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:18.684 nvme0n1 00:17:18.684 00:59:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:18.684 00:59:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:19.250 nvme1n2 00:17:19.250 00:59:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:19.250 00:59:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:19.250 00:59:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:19.250 00:59:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:19.250 00:59:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:19.250 00:59:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:19.250 00:59:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:19.250 00:59:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:19.250 00:59:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:19.507 00:59:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ e8db1ea8-bb33-4a6f-9776-1ab49ee872de == \e\8\d\b\1\e\a\8\-\b\b\3\3\-\4\a\6\f\-\9\7\7\6\-\1\a\b\4\9\e\e\8\7\2\d\e ]] 00:17:19.507 00:59:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:19.507 00:59:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:19.507 00:59:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:19.765 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 1e471b74-49eb-45c2-8e1a-7fb4a39c8d53 == \1\e\4\7\1\b\7\4\-\4\9\e\b\-\4\5\c\2\-\8\e\1\a\-\7\f\b\4\a\3\9\c\8\d\5\3 ]] 00:17:19.765 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1818228 00:17:19.765 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1818228 ']' 00:17:19.765 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1818228 00:17:19.765 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:17:19.765 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:19.765 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1818228 00:17:20.022 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:20.022 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:20.022 
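The `uuid2nguid` calls in this run turn a bdev UUID such as `e8db1ea8-bb33-4a6f-9776-1ab49ee872de` into the 32-hex-digit NGUID `E8DB1EA8BB334A6F97761AB49EE872DE` passed to `nvmf_subsystem_add_ns -g`. The trace only shows the `tr -d -` step from `nvmf/common.sh@759`, so the following is a sketch under the assumption that the remaining work is just uppercasing, not a copy of the real helper:

```shell
# Sketch (assumed equivalent of the uuid2nguid helper seen in the log):
# strip the dashes from a UUID and uppercase it to get the NGUID form
# that `rpc.py nvmf_subsystem_add_ns -g` expects.
uuid2nguid() {
  echo "$1" | tr -d - | tr '[:lower:]' '[:upper:]'
}

uuid2nguid e8db1ea8-bb33-4a6f-9776-1ab49ee872de
# -> E8DB1EA8BB334A6F97761AB49EE872DE (matches the -g argument in the trace)
```

The later `[[ ... == e8db1ea8-bb33-4a6f-9776-1ab49ee872de ]]` check against `bdev_get_bdevs` output confirms the round trip: the NGUID registered on the target maps back to the same UUID on the attached controller.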
00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1818228' 00:17:20.022 killing process with pid 1818228 00:17:20.022 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1818228 00:17:20.022 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1818228 00:17:20.279 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:20.536 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:17:20.536 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:17:20.536 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:20.536 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:17:20.536 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:20.536 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:17:20.536 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:20.536 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:20.536 rmmod nvme_tcp 00:17:20.536 rmmod nvme_fabrics 00:17:20.536 rmmod nvme_keyring 00:17:20.536 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:20.536 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:17:20.536 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:17:20.536 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' 
-n 1816606 ']' 00:17:20.536 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1816606 00:17:20.536 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1816606 ']' 00:17:20.536 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1816606 00:17:20.536 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:17:20.536 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:20.536 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1816606 00:17:20.536 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:20.536 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:20.536 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1816606' 00:17:20.536 killing process with pid 1816606 00:17:20.536 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1816606 00:17:20.536 00:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1816606 00:17:20.794 00:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:20.794 00:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:20.794 00:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:20.794 00:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:20.794 00:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:17:20.794 00:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.794 00:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:20.794 00:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:23.326 00:17:23.326 real 0m21.082s 00:17:23.326 user 0m27.429s 00:17:23.326 sys 0m4.189s 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:23.326 ************************************ 00:17:23.326 END TEST nvmf_ns_masking 00:17:23.326 ************************************ 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:23.326 ************************************ 00:17:23.326 START TEST nvmf_nvme_cli 00:17:23.326 ************************************ 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:23.326 * Looking for test storage... 
00:17:23.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:23.326 00:59:53 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
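The trace above shows `build_nvmf_app_args` assembling the target's command line by appending flags to the `NVMF_APP` bash array (`NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)`), with empty arrays like `NO_HUGE` contributing nothing. A minimal sketch of that pattern, using a hypothetical binary path rather than the harness's real one:

```shell
#!/usr/bin/env bash
# Sketch of how the harness builds the nvmf_tgt invocation as a bash array.
# Appending elements with += keeps each flag a separate word, so arguments
# survive word-splitting when expanded later as "${NVMF_APP[@]}".
NVMF_APP=(./build/bin/nvmf_tgt)   # hypothetical path for illustration
NVMF_APP_SHM_ID=0

# Always pass the shared-memory id and trace mask (mirrors common.sh@29).
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)

# An empty array expands to zero words, so optional flag groups can be
# appended unconditionally (mirrors the NO_HUGE append at common.sh@31).
NO_HUGE=()
NVMF_APP+=("${NO_HUGE[@]}")

printf '%s\n' "${NVMF_APP[@]}"
```

Expanding with `"${NVMF_APP[@]}"` (quoted, `@` subscript) is what makes this idiom safe; `"${NVMF_APP[*]}"` would collapse everything into one word.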
target/nvme_cli.sh@16 -- # nvmftestinit 00:17:23.326 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:23.327 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:23.327 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:23.327 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:23.327 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:23.327 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.327 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:23.327 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.327 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:23.327 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:23.327 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:17:23.327 00:59:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:25.225 
00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:25.225 00:59:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:25.225 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:25.226 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:25.226 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:25.226 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:25.226 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:25.226 00:59:55 
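The "Found net devices under 0000:0a:00.0: cvl_0_0" lines come from resolving each PCI function to its kernel interface name: the script globs `/sys/bus/pci/devices/$pci/net/*` and then strips the directory prefix with the `${pci_net_devs[@]##*/}` expansion. A self-contained sketch of the same two steps against a throwaway directory tree, so it runs without real hardware:

```shell
#!/usr/bin/env bash
# Recreate the sysfs layout in a temp dir and resolve PCI -> netdev names
# the way nvmf/common.sh@383-399 does: glob the net/ subdirectory, then
# strip everything up to the last '/' with the ##*/ parameter expansion.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:0a:00.0/net/cvl_0_0" "$sysfs/0000:0a:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:0a:00.0 0000:0a:00.1; do
    pci_net_devs=("$sysfs/$pci/net/"*)        # full paths, one per interface
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done

rm -rf "$sysfs"
```

On the real machine the glob runs against `/sys/bus/pci/devices/`, which is why the harness needs no driver-specific tooling to map `0x8086:0x159b` functions to `cvl_0_0`/`cvl_0_1`.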
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:25.226 00:59:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:25.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:25.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:17:25.226 00:17:25.226 --- 10.0.0.2 ping statistics --- 00:17:25.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.226 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:25.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
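The `nvmf_tcp_init` steps traced above isolate the target NIC in a network namespace: create the namespace, move `cvl_0_0` into it, address both ends (10.0.0.1 initiator side, 10.0.0.2 target side), bring links up, open TCP/4420 in iptables, then ping both directions. A dry-run sketch of that sequence — commands are recorded and echoed rather than executed, since the real thing needs root and the actual interfaces:

```shell
#!/usr/bin/env bash
# Dry-run of the netns plumbing from nvmf/common.sh@248-267. run() collects
# each command instead of executing it; drop the wrapper to run for real
# (as root, with the interfaces present).
cmds=()
run() { cmds+=("$*"); echo "$*"; }

NS=cvl_0_0_ns_spdk   # namespace that will host the SPDK target
TGT_IF=cvl_0_0       # target-side interface (moved into the namespace)
INI_IF=cvl_0_1       # initiator-side interface (stays in the root ns)

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target IP
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2   # initiator -> target reachability check
```

Because the target runs entirely inside `$NS` (the log later launches `nvmf_tgt` under `ip netns exec cvl_0_0_ns_spdk`), initiator traffic must traverse the real wire between the two ports rather than loopback.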
00:17:25.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:17:25.226 00:17:25.226 --- 10.0.0.1 ping statistics --- 00:17:25.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.226 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1820715 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1820715 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 1820715 ']' 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:25.226 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:25.226 [2024-07-26 00:59:55.501433] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:17:25.226 [2024-07-26 00:59:55.501503] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.226 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.226 [2024-07-26 00:59:55.565311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:25.226 [2024-07-26 00:59:55.651153] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:25.226 [2024-07-26 00:59:55.651203] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:25.226 [2024-07-26 00:59:55.651217] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:25.226 [2024-07-26 00:59:55.651229] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:25.226 [2024-07-26 00:59:55.651239] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:25.226 [2024-07-26 00:59:55.651288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.227 [2024-07-26 00:59:55.651348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:25.227 [2024-07-26 00:59:55.651394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:25.227 [2024-07-26 00:59:55.651396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:25.485 [2024-07-26 00:59:55.812648] tcp.c: 677:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:25.485 Malloc0 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:25.485 Malloc1 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.485 00:59:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:25.485 [2024-07-26 00:59:55.898663] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.485 00:59:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:17:25.743 00:17:25.743 Discovery Log Number of Records 2, Generation counter 2 00:17:25.743 =====Discovery Log Entry 0====== 00:17:25.743 trtype: tcp 00:17:25.743 adrfam: ipv4 00:17:25.743 subtype: current discovery subsystem 00:17:25.743 treq: not required 00:17:25.743 portid: 0 00:17:25.743 trsvcid: 4420 00:17:25.743 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:25.743 traddr: 10.0.0.2 00:17:25.743 eflags: explicit discovery connections, duplicate discovery information 00:17:25.743 sectype: none 00:17:25.743 =====Discovery Log Entry 1====== 00:17:25.743 trtype: tcp 00:17:25.743 adrfam: ipv4 00:17:25.743 subtype: nvme subsystem 00:17:25.743 treq: not required 00:17:25.743 portid: 0 00:17:25.743 trsvcid: 4420 00:17:25.743 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:25.743 traddr: 10.0.0.2 00:17:25.743 eflags: none 00:17:25.743 sectype: none 00:17:25.743 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:25.743 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:25.743 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:17:25.743 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:25.743 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:25.743 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:25.743 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:25.743 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:25.743 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
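The `nvme discover` output above reports two records: the discovery subsystem itself (`nqn.2014-08.org.nvmexpress.discovery`) and the test subsystem (`nqn.2016-06.io.spdk:cnode1`). A small sketch of pulling those fields back out of the text with grep/awk, fed from a trimmed copy of the captured output so it runs without nvme-cli:

```shell
#!/usr/bin/env bash
# Parse nvme-discover text: count log entries and extract each subnqn.
# The heredoc mirrors (in abbreviated form) the two records captured above.
discovery=$(cat <<'EOF'
Discovery Log Number of Records 2, Generation counter 2
=====Discovery Log Entry 0======
trtype:  tcp
subnqn:  nqn.2014-08.org.nvmexpress.discovery
traddr:  10.0.0.2
=====Discovery Log Entry 1======
trtype:  tcp
subnqn:  nqn.2016-06.io.spdk:cnode1
traddr:  10.0.0.2
EOF
)

entries=$(grep -c '^=====Discovery Log Entry' <<<"$discovery")
subnqns=$(awk '/^subnqn:/ {print $2}' <<<"$discovery")
echo "entries=$entries"
echo "$subnqns"
```

The second record's `subnqn` is what the test connects to next; the `eflags: none` on it (versus `explicit discovery connections, duplicate discovery information` on entry 0) distinguishes an I/O subsystem from the discovery subsystem.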
00:17:25.743 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:25.743 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:26.309 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:26.309 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:17:26.309 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:26.309 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:26.309 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:26.309 00:59:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:17:28.832 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:28.832 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:28.832 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:28.832 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:28.832 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:28.832 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:17:28.832 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 
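After `nvme connect`, `waitforserial` polls until the expected number of namespaces with serial `SPDKISFASTANDAWESOME` appears in `lsblk` (the trace shows the `(( i++ <= 15 ))` loop, the `grep -c`, and `nvme_devices=2` matching `nvme_device_counter`). A sketch of that bounded-retry pattern, with a hypothetical `count_devs` stub standing in for the real `lsblk -l -o NAME,SERIAL | grep -c` pipeline:

```shell
#!/usr/bin/env bash
# Bounded polling loop in the style of waitforserial: retry up to 16 times,
# succeeding once the observed device count matches the expected count.
count_devs() { echo 2; }   # hypothetical stub; the harness shells out to lsblk

waitforserial() {
    local want=$1 i=0 n=0
    while (( i++ <= 15 )); do
        n=$(count_devs)
        (( n == want )) && return 0   # both namespaces visible
        sleep 0.1                     # real code sleeps longer between polls
    done
    return 1                          # gave up: devices never appeared
}

waitforserial 2 && echo "serial visible"
```

Capping the iteration count matters in CI: a namespace that never attaches fails the test quickly instead of hanging the pipeline.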
00:17:28.832 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:17:28.832 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:28.832 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:28.832 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:28.832 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:28.832 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:28.832 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:28.832 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:28.832 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:17:28.832 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:28.832 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:28.832 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:17:28.832 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:28.832 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:17:28.832 /dev/nvme0n1 ]] 00:17:28.832 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:28.832 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:28.832 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:17:28.832 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev 
_ 00:17:28.832 00:59:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:28.832 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:28.832 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:28.832 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:28.832 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:28.832 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:28.832 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:17:28.832 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:28.832 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:28.832 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:17:28.832 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:28.832 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:28.832 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:29.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
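Both `get_nvme_devs` passes traced above use the same filter: read `nvme list` line by line and keep only fields matching `/dev/nvme*`, which drops the `Node ...` header and the `-----` separator row. A self-contained sketch fed from a heredoc (so it runs without nvme-cli or real devices):

```shell
#!/usr/bin/env bash
# Reimplementation of the get_nvme_devs filter from nvmf/common.sh@522-526:
# take the first whitespace-separated field of each line and keep it only
# if it looks like an NVMe device node.
devs=()
while read -r dev _; do
    [[ $dev == /dev/nvme* ]] && devs+=("$dev")
done <<'EOF'
Node                  SN                   Model
--------------------- -------------------- ----------------
/dev/nvme0n2          SPDKISFASTANDAWESOME SPDK_Controller1
/dev/nvme0n1          SPDKISFASTANDAWESOME SPDK_Controller1
EOF

printf '%s\n' "${devs[@]}"
```

In the log this yields `/dev/nvme0n2 /dev/nvme0n1`, giving `nvme_num=2` (the two Malloc-backed namespaces of `cnode1`) before the `nvme disconnect` that follows.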
common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:29.091 rmmod nvme_tcp 00:17:29.091 rmmod nvme_fabrics 00:17:29.091 rmmod 
nvme_keyring 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1820715 ']' 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1820715 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 1820715 ']' 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 1820715 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1820715 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1820715' 00:17:29.091 killing process with pid 1820715 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 1820715 00:17:29.091 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 1820715 00:17:29.350 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:29.350 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:29.350 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:29.350 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:29.350 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:29.350 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.350 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:29.350 00:59:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:31.880 00:17:31.880 real 0m8.435s 00:17:31.880 user 0m16.486s 00:17:31.880 sys 0m2.173s 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:31.880 ************************************ 00:17:31.880 END TEST nvmf_nvme_cli 00:17:31.880 ************************************ 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:31.880 
************************************ 00:17:31.880 START TEST nvmf_vfio_user 00:17:31.880 ************************************ 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:31.880 * Looking for test storage... 00:17:31.880 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.880 01:00:01 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:31.880 01:00:01 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1821631 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1821631' 00:17:31.880 Process pid: 1821631 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1821631 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1821631 ']' 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:31.880 01:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:31.880 [2024-07-26 01:00:01.904639] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:17:31.880 [2024-07-26 01:00:01.904755] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.880 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.880 [2024-07-26 01:00:01.964397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:31.880 [2024-07-26 01:00:02.058042] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:31.880 [2024-07-26 01:00:02.058103] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:31.880 [2024-07-26 01:00:02.058118] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:31.880 [2024-07-26 01:00:02.058131] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:31.880 [2024-07-26 01:00:02.058141] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:31.880 [2024-07-26 01:00:02.058206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.880 [2024-07-26 01:00:02.058234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:31.880 [2024-07-26 01:00:02.058282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:31.880 [2024-07-26 01:00:02.058285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.880 01:00:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:31.880 01:00:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:17:31.880 01:00:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:32.817 01:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:33.077 01:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:33.077 01:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:33.077 01:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:33.077 01:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:33.077 01:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:33.337 Malloc1 00:17:33.337 01:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:33.593 01:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:33.851 01:00:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:34.109 01:00:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:34.109 01:00:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:34.109 01:00:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:34.365 Malloc2 00:17:34.365 01:00:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:34.622 01:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:34.880 01:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:35.137 01:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:35.137 01:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:35.137 01:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:17:35.137 01:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:35.137 01:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:35.137 01:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:35.137 [2024-07-26 01:00:05.541684] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:17:35.137 [2024-07-26 01:00:05.541729] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1822169 ] 00:17:35.137 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.395 [2024-07-26 01:00:05.576606] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:35.395 [2024-07-26 01:00:05.584554] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:35.395 [2024-07-26 01:00:05.584583] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f17cf188000 00:17:35.395 [2024-07-26 01:00:05.585548] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:35.395 [2024-07-26 01:00:05.586536] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:35.395 [2024-07-26 
01:00:05.587543] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:35.395 [2024-07-26 01:00:05.588549] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:35.395 [2024-07-26 01:00:05.589557] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:35.395 [2024-07-26 01:00:05.590560] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:35.395 [2024-07-26 01:00:05.591559] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:35.395 [2024-07-26 01:00:05.592571] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:35.395 [2024-07-26 01:00:05.593580] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:35.395 [2024-07-26 01:00:05.593601] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f17cdf3c000 00:17:35.396 [2024-07-26 01:00:05.594727] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:35.396 [2024-07-26 01:00:05.610759] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:35.396 [2024-07-26 01:00:05.610795] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:17:35.396 [2024-07-26 01:00:05.615717] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 
0x0, value 0x201e0100ff 00:17:35.396 [2024-07-26 01:00:05.615772] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:35.396 [2024-07-26 01:00:05.615858] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:17:35.396 [2024-07-26 01:00:05.615885] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:17:35.396 [2024-07-26 01:00:05.615894] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:17:35.396 [2024-07-26 01:00:05.616708] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:35.396 [2024-07-26 01:00:05.616731] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:17:35.396 [2024-07-26 01:00:05.616744] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:17:35.396 [2024-07-26 01:00:05.617714] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:35.396 [2024-07-26 01:00:05.617732] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:17:35.396 [2024-07-26 01:00:05.617745] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:17:35.396 [2024-07-26 01:00:05.618716] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:35.396 [2024-07-26 01:00:05.618733] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:35.396 [2024-07-26 01:00:05.619723] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:17:35.396 [2024-07-26 01:00:05.619742] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:17:35.396 [2024-07-26 01:00:05.619750] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:17:35.396 [2024-07-26 01:00:05.619766] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:35.396 [2024-07-26 01:00:05.619876] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:17:35.396 [2024-07-26 01:00:05.619884] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:35.396 [2024-07-26 01:00:05.619892] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:35.396 [2024-07-26 01:00:05.620737] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:35.396 [2024-07-26 01:00:05.621737] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:35.396 [2024-07-26 01:00:05.622744] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:35.396 
[2024-07-26 01:00:05.623744] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:35.396 [2024-07-26 01:00:05.623854] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:35.396 [2024-07-26 01:00:05.624767] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:35.396 [2024-07-26 01:00:05.624784] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:35.396 [2024-07-26 01:00:05.624792] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:17:35.396 [2024-07-26 01:00:05.624815] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:17:35.396 [2024-07-26 01:00:05.624828] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:17:35.396 [2024-07-26 01:00:05.624851] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:35.396 [2024-07-26 01:00:05.624860] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:35.396 [2024-07-26 01:00:05.624866] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:35.396 [2024-07-26 01:00:05.624884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:35.396 [2024-07-26 01:00:05.624951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 
cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:35.396 [2024-07-26 01:00:05.624966] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:17:35.396 [2024-07-26 01:00:05.624974] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:17:35.396 [2024-07-26 01:00:05.624982] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:17:35.396 [2024-07-26 01:00:05.624989] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:35.396 [2024-07-26 01:00:05.624996] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:17:35.396 [2024-07-26 01:00:05.625004] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:17:35.396 [2024-07-26 01:00:05.625015] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:17:35.396 [2024-07-26 01:00:05.625027] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:17:35.396 [2024-07-26 01:00:05.625069] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:35.396 [2024-07-26 01:00:05.625094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:35.396 [2024-07-26 01:00:05.625117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:35.396 [2024-07-26 01:00:05.625131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:35.396 [2024-07-26 01:00:05.625142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:35.396 [2024-07-26 01:00:05.625154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:35.396 [2024-07-26 01:00:05.625163] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:35.396 [2024-07-26 01:00:05.625178] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:35.396 [2024-07-26 01:00:05.625192] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:35.396 [2024-07-26 01:00:05.625206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:35.396 [2024-07-26 01:00:05.625216] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:17:35.396 [2024-07-26 01:00:05.625224] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:35.396 [2024-07-26 01:00:05.625239] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:17:35.396 [2024-07-26 01:00:05.625249] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:35.396 [2024-07-26 01:00:05.625262] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:35.396 [2024-07-26 01:00:05.625276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:35.396 [2024-07-26 01:00:05.625355] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:17:35.396 [2024-07-26 01:00:05.625371] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:35.396 [2024-07-26 01:00:05.625384] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:35.396 [2024-07-26 01:00:05.625392] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:35.396 [2024-07-26 01:00:05.625398] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:35.396 [2024-07-26 01:00:05.625407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:35.396 [2024-07-26 01:00:05.625422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:35.396 [2024-07-26 01:00:05.625441] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:17:35.396 [2024-07-26 01:00:05.625456] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:17:35.396 [2024-07-26 01:00:05.625470] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:17:35.396 [2024-07-26 
01:00:05.625481] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:35.396 [2024-07-26 01:00:05.625488] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:35.396 [2024-07-26 01:00:05.625494] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:35.396 [2024-07-26 01:00:05.625503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:35.396 [2024-07-26 01:00:05.625529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:35.397 [2024-07-26 01:00:05.625550] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:35.397 [2024-07-26 01:00:05.625563] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:35.397 [2024-07-26 01:00:05.625574] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:35.397 [2024-07-26 01:00:05.625582] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:35.397 [2024-07-26 01:00:05.625588] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:35.397 [2024-07-26 01:00:05.625597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:35.397 [2024-07-26 01:00:05.625610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:35.397 [2024-07-26 01:00:05.625623] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:35.397 [2024-07-26 01:00:05.625633] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:17:35.397 [2024-07-26 01:00:05.625646] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:17:35.397 [2024-07-26 01:00:05.625659] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:35.397 [2024-07-26 01:00:05.625667] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:35.397 [2024-07-26 01:00:05.625675] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:17:35.397 [2024-07-26 01:00:05.625683] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:17:35.397 [2024-07-26 01:00:05.625691] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:17:35.397 [2024-07-26 01:00:05.625699] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:17:35.397 [2024-07-26 01:00:05.625724] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:35.397 [2024-07-26 01:00:05.625741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 
00:17:35.397 [2024-07-26 01:00:05.625763] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:35.397 [2024-07-26 01:00:05.625778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:35.397 [2024-07-26 01:00:05.625793] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:35.397 [2024-07-26 01:00:05.625804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:35.397 [2024-07-26 01:00:05.625820] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:35.397 [2024-07-26 01:00:05.625830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:35.397 [2024-07-26 01:00:05.625851] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:35.397 [2024-07-26 01:00:05.625861] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:35.397 [2024-07-26 01:00:05.625867] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:35.397 [2024-07-26 01:00:05.625872] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:35.397 [2024-07-26 01:00:05.625878] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:35.397 [2024-07-26 01:00:05.625887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:35.397 [2024-07-26 01:00:05.625898] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fc000 len:512 00:17:35.397 [2024-07-26 01:00:05.625905] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:35.397 [2024-07-26 01:00:05.625911] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:35.397 [2024-07-26 01:00:05.625919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:35.397 [2024-07-26 01:00:05.625929] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:35.397 [2024-07-26 01:00:05.625937] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:35.397 [2024-07-26 01:00:05.625943] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:35.397 [2024-07-26 01:00:05.625951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:35.397 [2024-07-26 01:00:05.625962] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:35.397 [2024-07-26 01:00:05.625969] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:35.397 [2024-07-26 01:00:05.625975] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:35.397 [2024-07-26 01:00:05.625983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:35.397 [2024-07-26 01:00:05.625994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:35.397 [2024-07-26 01:00:05.626013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:35.397 [2024-07-26 01:00:05.626031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:35.397 [2024-07-26 01:00:05.626043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:35.397 ===================================================== 00:17:35.397 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:35.397 ===================================================== 00:17:35.397 Controller Capabilities/Features 00:17:35.397 ================================ 00:17:35.397 Vendor ID: 4e58 00:17:35.397 Subsystem Vendor ID: 4e58 00:17:35.397 Serial Number: SPDK1 00:17:35.397 Model Number: SPDK bdev Controller 00:17:35.397 Firmware Version: 24.09 00:17:35.397 Recommended Arb Burst: 6 00:17:35.397 IEEE OUI Identifier: 8d 6b 50 00:17:35.397 Multi-path I/O 00:17:35.397 May have multiple subsystem ports: Yes 00:17:35.397 May have multiple controllers: Yes 00:17:35.397 Associated with SR-IOV VF: No 00:17:35.397 Max Data Transfer Size: 131072 00:17:35.397 Max Number of Namespaces: 32 00:17:35.397 Max Number of I/O Queues: 127 00:17:35.397 NVMe Specification Version (VS): 1.3 00:17:35.397 NVMe Specification Version (Identify): 1.3 00:17:35.397 Maximum Queue Entries: 256 00:17:35.397 Contiguous Queues Required: Yes 00:17:35.397 Arbitration Mechanisms Supported 00:17:35.397 Weighted Round Robin: Not Supported 00:17:35.397 Vendor Specific: Not Supported 00:17:35.397 Reset Timeout: 15000 ms 00:17:35.397 Doorbell Stride: 4 bytes 00:17:35.397 NVM Subsystem Reset: Not Supported 00:17:35.397 Command Sets Supported 00:17:35.397 NVM Command Set: Supported 00:17:35.397 Boot Partition: Not Supported 00:17:35.397 Memory Page Size Minimum: 4096 bytes 00:17:35.397 Memory Page Size Maximum: 4096 bytes 00:17:35.397 Persistent Memory Region: Not 
Supported 00:17:35.397 Optional Asynchronous Events Supported 00:17:35.397 Namespace Attribute Notices: Supported 00:17:35.397 Firmware Activation Notices: Not Supported 00:17:35.397 ANA Change Notices: Not Supported 00:17:35.397 PLE Aggregate Log Change Notices: Not Supported 00:17:35.397 LBA Status Info Alert Notices: Not Supported 00:17:35.397 EGE Aggregate Log Change Notices: Not Supported 00:17:35.397 Normal NVM Subsystem Shutdown event: Not Supported 00:17:35.397 Zone Descriptor Change Notices: Not Supported 00:17:35.397 Discovery Log Change Notices: Not Supported 00:17:35.397 Controller Attributes 00:17:35.397 128-bit Host Identifier: Supported 00:17:35.397 Non-Operational Permissive Mode: Not Supported 00:17:35.397 NVM Sets: Not Supported 00:17:35.397 Read Recovery Levels: Not Supported 00:17:35.397 Endurance Groups: Not Supported 00:17:35.397 Predictable Latency Mode: Not Supported 00:17:35.397 Traffic Based Keep ALive: Not Supported 00:17:35.397 Namespace Granularity: Not Supported 00:17:35.397 SQ Associations: Not Supported 00:17:35.397 UUID List: Not Supported 00:17:35.397 Multi-Domain Subsystem: Not Supported 00:17:35.397 Fixed Capacity Management: Not Supported 00:17:35.397 Variable Capacity Management: Not Supported 00:17:35.397 Delete Endurance Group: Not Supported 00:17:35.397 Delete NVM Set: Not Supported 00:17:35.397 Extended LBA Formats Supported: Not Supported 00:17:35.397 Flexible Data Placement Supported: Not Supported 00:17:35.397 00:17:35.397 Controller Memory Buffer Support 00:17:35.397 ================================ 00:17:35.397 Supported: No 00:17:35.397 00:17:35.397 Persistent Memory Region Support 00:17:35.397 ================================ 00:17:35.397 Supported: No 00:17:35.397 00:17:35.397 Admin Command Set Attributes 00:17:35.397 ============================ 00:17:35.397 Security Send/Receive: Not Supported 00:17:35.397 Format NVM: Not Supported 00:17:35.397 Firmware Activate/Download: Not Supported 00:17:35.397 Namespace 
Management: Not Supported 00:17:35.397 Device Self-Test: Not Supported 00:17:35.397 Directives: Not Supported 00:17:35.397 NVMe-MI: Not Supported 00:17:35.397 Virtualization Management: Not Supported 00:17:35.397 Doorbell Buffer Config: Not Supported 00:17:35.398 Get LBA Status Capability: Not Supported 00:17:35.398 Command & Feature Lockdown Capability: Not Supported 00:17:35.398 Abort Command Limit: 4 00:17:35.398 Async Event Request Limit: 4 00:17:35.398 Number of Firmware Slots: N/A 00:17:35.398 Firmware Slot 1 Read-Only: N/A 00:17:35.398 Firmware Activation Without Reset: N/A 00:17:35.398 Multiple Update Detection Support: N/A 00:17:35.398 Firmware Update Granularity: No Information Provided 00:17:35.398 Per-Namespace SMART Log: No 00:17:35.398 Asymmetric Namespace Access Log Page: Not Supported 00:17:35.398 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:35.398 Command Effects Log Page: Supported 00:17:35.398 Get Log Page Extended Data: Supported 00:17:35.398 Telemetry Log Pages: Not Supported 00:17:35.398 Persistent Event Log Pages: Not Supported 00:17:35.398 Supported Log Pages Log Page: May Support 00:17:35.398 Commands Supported & Effects Log Page: Not Supported 00:17:35.398 Feature Identifiers & Effects Log Page:May Support 00:17:35.398 NVMe-MI Commands & Effects Log Page: May Support 00:17:35.398 Data Area 4 for Telemetry Log: Not Supported 00:17:35.398 Error Log Page Entries Supported: 128 00:17:35.398 Keep Alive: Supported 00:17:35.398 Keep Alive Granularity: 10000 ms 00:17:35.398 00:17:35.398 NVM Command Set Attributes 00:17:35.398 ========================== 00:17:35.398 Submission Queue Entry Size 00:17:35.398 Max: 64 00:17:35.398 Min: 64 00:17:35.398 Completion Queue Entry Size 00:17:35.398 Max: 16 00:17:35.398 Min: 16 00:17:35.398 Number of Namespaces: 32 00:17:35.398 Compare Command: Supported 00:17:35.398 Write Uncorrectable Command: Not Supported 00:17:35.398 Dataset Management Command: Supported 00:17:35.398 Write Zeroes Command: Supported 
00:17:35.398 Set Features Save Field: Not Supported 00:17:35.398 Reservations: Not Supported 00:17:35.398 Timestamp: Not Supported 00:17:35.398 Copy: Supported 00:17:35.398 Volatile Write Cache: Present 00:17:35.398 Atomic Write Unit (Normal): 1 00:17:35.398 Atomic Write Unit (PFail): 1 00:17:35.398 Atomic Compare & Write Unit: 1 00:17:35.398 Fused Compare & Write: Supported 00:17:35.398 Scatter-Gather List 00:17:35.398 SGL Command Set: Supported (Dword aligned) 00:17:35.398 SGL Keyed: Not Supported 00:17:35.398 SGL Bit Bucket Descriptor: Not Supported 00:17:35.398 SGL Metadata Pointer: Not Supported 00:17:35.398 Oversized SGL: Not Supported 00:17:35.398 SGL Metadata Address: Not Supported 00:17:35.398 SGL Offset: Not Supported 00:17:35.398 Transport SGL Data Block: Not Supported 00:17:35.398 Replay Protected Memory Block: Not Supported 00:17:35.398 00:17:35.398 Firmware Slot Information 00:17:35.398 ========================= 00:17:35.398 Active slot: 1 00:17:35.398 Slot 1 Firmware Revision: 24.09 00:17:35.398 00:17:35.398 00:17:35.398 Commands Supported and Effects 00:17:35.398 ============================== 00:17:35.398 Admin Commands 00:17:35.398 -------------- 00:17:35.398 Get Log Page (02h): Supported 00:17:35.398 Identify (06h): Supported 00:17:35.398 Abort (08h): Supported 00:17:35.398 Set Features (09h): Supported 00:17:35.398 Get Features (0Ah): Supported 00:17:35.398 Asynchronous Event Request (0Ch): Supported 00:17:35.398 Keep Alive (18h): Supported 00:17:35.398 I/O Commands 00:17:35.398 ------------ 00:17:35.398 Flush (00h): Supported LBA-Change 00:17:35.398 Write (01h): Supported LBA-Change 00:17:35.398 Read (02h): Supported 00:17:35.398 Compare (05h): Supported 00:17:35.398 Write Zeroes (08h): Supported LBA-Change 00:17:35.398 Dataset Management (09h): Supported LBA-Change 00:17:35.398 Copy (19h): Supported LBA-Change 00:17:35.398 00:17:35.398 Error Log 00:17:35.398 ========= 00:17:35.398 00:17:35.398 Arbitration 00:17:35.398 =========== 00:17:35.398 
Arbitration Burst: 1 00:17:35.398 00:17:35.398 Power Management 00:17:35.398 ================ 00:17:35.398 Number of Power States: 1 00:17:35.398 Current Power State: Power State #0 00:17:35.398 Power State #0: 00:17:35.398 Max Power: 0.00 W 00:17:35.398 Non-Operational State: Operational 00:17:35.398 Entry Latency: Not Reported 00:17:35.398 Exit Latency: Not Reported 00:17:35.398 Relative Read Throughput: 0 00:17:35.398 Relative Read Latency: 0 00:17:35.398 Relative Write Throughput: 0 00:17:35.398 Relative Write Latency: 0 00:17:35.398 Idle Power: Not Reported 00:17:35.398 Active Power: Not Reported 00:17:35.398 Non-Operational Permissive Mode: Not Supported 00:17:35.398 00:17:35.398 Health Information 00:17:35.398 ================== 00:17:35.398 Critical Warnings: 00:17:35.398 Available Spare Space: OK 00:17:35.398 Temperature: OK 00:17:35.398 Device Reliability: OK 00:17:35.398 Read Only: No 00:17:35.398 Volatile Memory Backup: OK 00:17:35.398 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:35.398 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:35.398 Available Spare: 0% 00:17:35.398 Available Sp[2024-07-26 01:00:05.626172] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:35.398 [2024-07-26 01:00:05.626191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:35.398 [2024-07-26 01:00:05.626234] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:17:35.398 [2024-07-26 01:00:05.626251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.398 [2024-07-26 01:00:05.626261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.398 [2024-07-26 01:00:05.626271] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.398 [2024-07-26 01:00:05.626280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.398 [2024-07-26 01:00:05.630087] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:35.398 [2024-07-26 01:00:05.630108] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:35.398 [2024-07-26 01:00:05.630792] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:35.398 [2024-07-26 01:00:05.630869] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:17:35.398 [2024-07-26 01:00:05.630882] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:17:35.398 [2024-07-26 01:00:05.631804] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:35.398 [2024-07-26 01:00:05.631826] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:17:35.398 [2024-07-26 01:00:05.631878] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:35.398 [2024-07-26 01:00:05.633844] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:35.398 are Threshold: 0% 00:17:35.398 Life Percentage Used: 0% 00:17:35.398 Data Units Read: 0 00:17:35.398 Data Units Written: 0 00:17:35.398 Host Read Commands: 0 00:17:35.398 Host Write Commands: 
0 00:17:35.398 Controller Busy Time: 0 minutes 00:17:35.398 Power Cycles: 0 00:17:35.398 Power On Hours: 0 hours 00:17:35.398 Unsafe Shutdowns: 0 00:17:35.398 Unrecoverable Media Errors: 0 00:17:35.398 Lifetime Error Log Entries: 0 00:17:35.398 Warning Temperature Time: 0 minutes 00:17:35.398 Critical Temperature Time: 0 minutes 00:17:35.398 00:17:35.398 Number of Queues 00:17:35.398 ================ 00:17:35.398 Number of I/O Submission Queues: 127 00:17:35.398 Number of I/O Completion Queues: 127 00:17:35.398 00:17:35.398 Active Namespaces 00:17:35.398 ================= 00:17:35.398 Namespace ID:1 00:17:35.398 Error Recovery Timeout: Unlimited 00:17:35.398 Command Set Identifier: NVM (00h) 00:17:35.398 Deallocate: Supported 00:17:35.398 Deallocated/Unwritten Error: Not Supported 00:17:35.398 Deallocated Read Value: Unknown 00:17:35.398 Deallocate in Write Zeroes: Not Supported 00:17:35.398 Deallocated Guard Field: 0xFFFF 00:17:35.398 Flush: Supported 00:17:35.398 Reservation: Supported 00:17:35.398 Namespace Sharing Capabilities: Multiple Controllers 00:17:35.398 Size (in LBAs): 131072 (0GiB) 00:17:35.398 Capacity (in LBAs): 131072 (0GiB) 00:17:35.398 Utilization (in LBAs): 131072 (0GiB) 00:17:35.398 NGUID: A6F6E8586CF84E70A935AA9232032C30 00:17:35.398 UUID: a6f6e858-6cf8-4e70-a935-aa9232032c30 00:17:35.398 Thin Provisioning: Not Supported 00:17:35.398 Per-NS Atomic Units: Yes 00:17:35.398 Atomic Boundary Size (Normal): 0 00:17:35.398 Atomic Boundary Size (PFail): 0 00:17:35.399 Atomic Boundary Offset: 0 00:17:35.399 Maximum Single Source Range Length: 65535 00:17:35.399 Maximum Copy Length: 65535 00:17:35.399 Maximum Source Range Count: 1 00:17:35.399 NGUID/EUI64 Never Reused: No 00:17:35.399 Namespace Write Protected: No 00:17:35.399 Number of LBA Formats: 1 00:17:35.399 Current LBA Format: LBA Format #00 00:17:35.399 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:35.399 00:17:35.399 01:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:35.399 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.655 [2024-07-26 01:00:05.864928] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:40.916 Initializing NVMe Controllers 00:17:40.916 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:40.916 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:40.916 Initialization complete. Launching workers. 00:17:40.916 ======================================================== 00:17:40.917 Latency(us) 00:17:40.917 Device Information : IOPS MiB/s Average min max 00:17:40.917 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33936.56 132.56 3771.11 1184.79 7508.52 00:17:40.917 ======================================================== 00:17:40.917 Total : 33936.56 132.56 3771.11 1184.79 7508.52 00:17:40.917 00:17:40.917 [2024-07-26 01:00:10.884474] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:40.917 01:00:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:40.917 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.917 [2024-07-26 01:00:11.126634] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:46.185 Initializing NVMe Controllers 00:17:46.185 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:46.185 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:46.185 Initialization complete. Launching workers. 00:17:46.185 ======================================================== 00:17:46.185 Latency(us) 00:17:46.185 Device Information : IOPS MiB/s Average min max 00:17:46.185 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16042.55 62.67 7978.10 6965.62 8110.42 00:17:46.185 ======================================================== 00:17:46.185 Total : 16042.55 62.67 7978.10 6965.62 8110.42 00:17:46.185 00:17:46.185 [2024-07-26 01:00:16.163414] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:46.185 01:00:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:46.185 EAL: No free 2048 kB hugepages reported on node 1 00:17:46.185 [2024-07-26 01:00:16.384532] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:51.460 [2024-07-26 01:00:21.453391] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:51.460 Initializing NVMe Controllers 00:17:51.460 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:51.460 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:51.460 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:17:51.460 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:17:51.460 Associating VFIOUSER 
(/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:17:51.460 Initialization complete. Launching workers. 00:17:51.460 Starting thread on core 2 00:17:51.460 Starting thread on core 3 00:17:51.460 Starting thread on core 1 00:17:51.460 01:00:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:17:51.460 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.460 [2024-07-26 01:00:21.749596] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:54.749 [2024-07-26 01:00:24.970629] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:54.749 Initializing NVMe Controllers 00:17:54.749 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:54.749 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:54.749 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:17:54.749 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:17:54.749 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:17:54.749 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:17:54.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:54.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:54.749 Initialization complete. Launching workers. 
00:17:54.749 Starting thread on core 1 with urgent priority queue 00:17:54.749 Starting thread on core 2 with urgent priority queue 00:17:54.749 Starting thread on core 3 with urgent priority queue 00:17:54.749 Starting thread on core 0 with urgent priority queue 00:17:54.749 SPDK bdev Controller (SPDK1 ) core 0: 3694.00 IO/s 27.07 secs/100000 ios 00:17:54.749 SPDK bdev Controller (SPDK1 ) core 1: 3713.67 IO/s 26.93 secs/100000 ios 00:17:54.749 SPDK bdev Controller (SPDK1 ) core 2: 3455.67 IO/s 28.94 secs/100000 ios 00:17:54.749 SPDK bdev Controller (SPDK1 ) core 3: 3977.33 IO/s 25.14 secs/100000 ios 00:17:54.749 ======================================================== 00:17:54.749 00:17:54.749 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:54.749 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.007 [2024-07-26 01:00:25.259566] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:55.007 Initializing NVMe Controllers 00:17:55.007 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:55.007 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:55.007 Namespace ID: 1 size: 0GB 00:17:55.007 Initialization complete. 00:17:55.007 INFO: using host memory buffer for IO 00:17:55.007 Hello world! 
00:17:55.007 [2024-07-26 01:00:25.294125] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:55.007 01:00:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:55.007 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.265 [2024-07-26 01:00:25.581510] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:56.201 Initializing NVMe Controllers 00:17:56.201 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:56.201 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:56.201 Initialization complete. Launching workers. 00:17:56.201 submit (in ns) avg, min, max = 8430.7, 3534.4, 4015684.4 00:17:56.201 complete (in ns) avg, min, max = 26711.3, 2061.1, 4024208.9 00:17:56.201 00:17:56.201 Submit histogram 00:17:56.201 ================ 00:17:56.201 Range in us Cumulative Count 00:17:56.201 3.532 - 3.556: 0.2167% ( 29) 00:17:56.201 3.556 - 3.579: 1.2554% ( 139) 00:17:56.201 3.579 - 3.603: 3.6093% ( 315) 00:17:56.201 3.603 - 3.627: 8.2873% ( 626) 00:17:56.201 3.627 - 3.650: 17.0004% ( 1166) 00:17:56.201 3.650 - 3.674: 27.6939% ( 1431) 00:17:56.201 3.674 - 3.698: 37.4010% ( 1299) 00:17:56.201 3.698 - 3.721: 46.0843% ( 1162) 00:17:56.201 3.721 - 3.745: 52.3464% ( 838) 00:17:56.201 3.745 - 3.769: 58.0108% ( 758) 00:17:56.201 3.769 - 3.793: 62.6364% ( 619) 00:17:56.201 3.793 - 3.816: 66.7015% ( 544) 00:17:56.201 3.816 - 3.840: 70.3482% ( 488) 00:17:56.201 3.840 - 3.864: 73.5764% ( 432) 00:17:56.201 3.864 - 3.887: 77.0363% ( 463) 00:17:56.201 3.887 - 3.911: 80.3692% ( 446) 00:17:56.201 3.911 - 3.935: 83.6572% ( 440) 00:17:56.201 3.935 - 3.959: 86.5715% ( 390) 00:17:56.201 3.959 - 3.982: 88.5294% ( 262) 00:17:56.201 3.982 - 
4.006: 90.4424% ( 256) 00:17:56.201 4.006 - 4.030: 91.8697% ( 191) 00:17:56.201 4.030 - 4.053: 93.3194% ( 194) 00:17:56.201 4.053 - 4.077: 94.4627% ( 153) 00:17:56.201 4.077 - 4.101: 95.3295% ( 116) 00:17:56.201 4.101 - 4.124: 95.8153% ( 65) 00:17:56.201 4.124 - 4.148: 96.2861% ( 63) 00:17:56.201 4.148 - 4.172: 96.5924% ( 41) 00:17:56.201 4.172 - 4.196: 96.7643% ( 23) 00:17:56.201 4.196 - 4.219: 96.9063% ( 19) 00:17:56.201 4.219 - 4.243: 96.9885% ( 11) 00:17:56.201 4.243 - 4.267: 97.0931% ( 14) 00:17:56.201 4.267 - 4.290: 97.1678% ( 10) 00:17:56.201 4.290 - 4.314: 97.2426% ( 10) 00:17:56.201 4.314 - 4.338: 97.3248% ( 11) 00:17:56.201 4.338 - 4.361: 97.3995% ( 10) 00:17:56.201 4.361 - 4.385: 97.4369% ( 5) 00:17:56.201 4.385 - 4.409: 97.4966% ( 8) 00:17:56.201 4.409 - 4.433: 97.5265% ( 4) 00:17:56.201 4.433 - 4.456: 97.5564% ( 4) 00:17:56.201 4.456 - 4.480: 97.5714% ( 2) 00:17:56.201 4.480 - 4.504: 97.5863% ( 2) 00:17:56.201 4.504 - 4.527: 97.6162% ( 4) 00:17:56.201 4.527 - 4.551: 97.6237% ( 1) 00:17:56.201 4.551 - 4.575: 97.6386% ( 2) 00:17:56.201 4.599 - 4.622: 97.6461% ( 1) 00:17:56.201 4.622 - 4.646: 97.6760% ( 4) 00:17:56.201 4.646 - 4.670: 97.6909% ( 2) 00:17:56.201 4.670 - 4.693: 97.7133% ( 3) 00:17:56.201 4.693 - 4.717: 97.7432% ( 4) 00:17:56.201 4.717 - 4.741: 97.7731% ( 4) 00:17:56.201 4.741 - 4.764: 97.8105% ( 5) 00:17:56.201 4.764 - 4.788: 97.8553% ( 6) 00:17:56.201 4.788 - 4.812: 97.8852% ( 4) 00:17:56.201 4.812 - 4.836: 97.9002% ( 2) 00:17:56.201 4.836 - 4.859: 97.9226% ( 3) 00:17:56.201 4.859 - 4.883: 97.9749% ( 7) 00:17:56.201 4.883 - 4.907: 98.0048% ( 4) 00:17:56.201 4.907 - 4.930: 98.0421% ( 5) 00:17:56.201 4.930 - 4.954: 98.0646% ( 3) 00:17:56.201 4.954 - 4.978: 98.0945% ( 4) 00:17:56.201 4.978 - 5.001: 98.1019% ( 1) 00:17:56.201 5.001 - 5.025: 98.1094% ( 1) 00:17:56.201 5.025 - 5.049: 98.1243% ( 2) 00:17:56.201 5.049 - 5.073: 98.1692% ( 6) 00:17:56.201 5.073 - 5.096: 98.1841% ( 2) 00:17:56.201 5.096 - 5.120: 98.2065% ( 3) 00:17:56.201 5.120 - 
5.144: 98.2215% ( 2) 00:17:56.201 5.144 - 5.167: 98.2439% ( 3) 00:17:56.201 5.167 - 5.191: 98.2514% ( 1) 00:17:56.201 5.191 - 5.215: 98.2589% ( 1) 00:17:56.201 5.215 - 5.239: 98.2663% ( 1) 00:17:56.201 5.239 - 5.262: 98.2813% ( 2) 00:17:56.201 5.286 - 5.310: 98.2962% ( 2) 00:17:56.201 5.333 - 5.357: 98.3037% ( 1) 00:17:56.201 5.357 - 5.381: 98.3112% ( 1) 00:17:56.201 5.428 - 5.452: 98.3186% ( 1) 00:17:56.201 5.452 - 5.476: 98.3261% ( 1) 00:17:56.201 5.618 - 5.641: 98.3336% ( 1) 00:17:56.201 5.665 - 5.689: 98.3411% ( 1) 00:17:56.201 5.713 - 5.736: 98.3485% ( 1) 00:17:56.201 5.736 - 5.760: 98.3635% ( 2) 00:17:56.201 5.760 - 5.784: 98.3709% ( 1) 00:17:56.201 5.784 - 5.807: 98.3784% ( 1) 00:17:56.201 5.831 - 5.855: 98.3859% ( 1) 00:17:56.201 5.855 - 5.879: 98.4158% ( 4) 00:17:56.201 5.950 - 5.973: 98.4233% ( 1) 00:17:56.201 5.997 - 6.021: 98.4307% ( 1) 00:17:56.201 6.116 - 6.163: 98.4382% ( 1) 00:17:56.201 6.163 - 6.210: 98.4457% ( 1) 00:17:56.201 6.353 - 6.400: 98.4531% ( 1) 00:17:56.201 6.447 - 6.495: 98.4606% ( 1) 00:17:56.201 6.495 - 6.542: 98.4681% ( 1) 00:17:56.201 6.590 - 6.637: 98.4830% ( 2) 00:17:56.201 6.684 - 6.732: 98.4905% ( 1) 00:17:56.201 6.874 - 6.921: 98.4980% ( 1) 00:17:56.201 7.064 - 7.111: 98.5055% ( 1) 00:17:56.201 7.206 - 7.253: 98.5129% ( 1) 00:17:56.201 7.301 - 7.348: 98.5204% ( 1) 00:17:56.201 7.348 - 7.396: 98.5279% ( 1) 00:17:56.201 7.490 - 7.538: 98.5578% ( 4) 00:17:56.201 7.538 - 7.585: 98.5727% ( 2) 00:17:56.201 7.633 - 7.680: 98.5802% ( 1) 00:17:56.201 7.775 - 7.822: 98.5877% ( 1) 00:17:56.201 7.822 - 7.870: 98.5951% ( 1) 00:17:56.201 7.870 - 7.917: 98.6175% ( 3) 00:17:56.201 7.917 - 7.964: 98.6250% ( 1) 00:17:56.201 8.012 - 8.059: 98.6325% ( 1) 00:17:56.201 8.154 - 8.201: 98.6400% ( 1) 00:17:56.201 8.344 - 8.391: 98.6474% ( 1) 00:17:56.201 8.439 - 8.486: 98.6549% ( 1) 00:17:56.201 8.486 - 8.533: 98.6624% ( 1) 00:17:56.201 8.628 - 8.676: 98.6773% ( 2) 00:17:56.201 8.676 - 8.723: 98.6848% ( 1) 00:17:56.201 8.770 - 8.818: 98.6923% ( 1) 
00:17:56.201 8.865 - 8.913: 98.6997% ( 1) 00:17:56.201 8.913 - 8.960: 98.7072% ( 1) 00:17:56.201 9.055 - 9.102: 98.7147% ( 1) 00:17:56.201 9.102 - 9.150: 98.7222% ( 1) 00:17:56.201 9.244 - 9.292: 98.7296% ( 1) 00:17:56.201 9.481 - 9.529: 98.7371% ( 1) 00:17:56.201 9.908 - 9.956: 98.7446% ( 1) 00:17:56.201 10.050 - 10.098: 98.7521% ( 1) 00:17:56.201 10.098 - 10.145: 98.7595% ( 1) 00:17:56.201 10.193 - 10.240: 98.7670% ( 1) 00:17:56.201 10.240 - 10.287: 98.7745% ( 1) 00:17:56.201 10.335 - 10.382: 98.7819% ( 1) 00:17:56.201 10.382 - 10.430: 98.7894% ( 1) 00:17:56.201 10.477 - 10.524: 98.7969% ( 1) 00:17:56.201 10.524 - 10.572: 98.8044% ( 1) 00:17:56.201 10.619 - 10.667: 98.8118% ( 1) 00:17:56.201 10.714 - 10.761: 98.8268% ( 2) 00:17:56.201 11.046 - 11.093: 98.8343% ( 1) 00:17:56.201 11.236 - 11.283: 98.8417% ( 1) 00:17:56.201 11.283 - 11.330: 98.8492% ( 1) 00:17:56.201 11.330 - 11.378: 98.8641% ( 2) 00:17:56.201 11.425 - 11.473: 98.8791% ( 2) 00:17:56.201 11.804 - 11.852: 98.8866% ( 1) 00:17:56.201 12.089 - 12.136: 98.8940% ( 1) 00:17:56.201 12.610 - 12.705: 98.9015% ( 1) 00:17:56.201 12.705 - 12.800: 98.9090% ( 1) 00:17:56.201 12.800 - 12.895: 98.9239% ( 2) 00:17:56.201 12.895 - 12.990: 98.9314% ( 1) 00:17:56.201 13.084 - 13.179: 98.9389% ( 1) 00:17:56.201 13.274 - 13.369: 98.9463% ( 1) 00:17:56.201 13.369 - 13.464: 98.9538% ( 1) 00:17:56.201 13.464 - 13.559: 98.9613% ( 1) 00:17:56.201 13.559 - 13.653: 98.9762% ( 2) 00:17:56.201 13.843 - 13.938: 98.9837% ( 1) 00:17:56.201 13.938 - 14.033: 98.9912% ( 1) 00:17:56.201 14.033 - 14.127: 98.9987% ( 1) 00:17:56.201 14.507 - 14.601: 99.0061% ( 1) 00:17:56.201 14.696 - 14.791: 99.0136% ( 1) 00:17:56.201 14.886 - 14.981: 99.0211% ( 1) 00:17:56.201 17.161 - 17.256: 99.0285% ( 1) 00:17:56.202 17.256 - 17.351: 99.0435% ( 2) 00:17:56.202 17.351 - 17.446: 99.0584% ( 2) 00:17:56.202 17.446 - 17.541: 99.0958% ( 5) 00:17:56.202 17.541 - 17.636: 99.1631% ( 9) 00:17:56.202 17.636 - 17.730: 99.1855% ( 3) 00:17:56.202 17.730 - 17.825: 
99.2303% ( 6) 00:17:56.202 17.825 - 17.920: 99.2602% ( 4) 00:17:56.202 17.920 - 18.015: 99.2976% ( 5) 00:17:56.202 18.015 - 18.110: 99.3573% ( 8) 00:17:56.202 18.110 - 18.204: 99.4171% ( 8) 00:17:56.202 18.204 - 18.299: 99.4694% ( 7) 00:17:56.202 18.299 - 18.394: 99.5217% ( 7) 00:17:56.202 18.394 - 18.489: 99.5815% ( 8) 00:17:56.202 18.489 - 18.584: 99.6264% ( 6) 00:17:56.202 18.584 - 18.679: 99.6787% ( 7) 00:17:56.202 18.679 - 18.773: 99.7160% ( 5) 00:17:56.202 18.773 - 18.868: 99.7235% ( 1) 00:17:56.202 18.868 - 18.963: 99.7310% ( 1) 00:17:56.202 18.963 - 19.058: 99.7683% ( 5) 00:17:56.202 19.058 - 19.153: 99.7908% ( 3) 00:17:56.202 19.153 - 19.247: 99.7982% ( 1) 00:17:56.202 19.342 - 19.437: 99.8132% ( 2) 00:17:56.202 19.437 - 19.532: 99.8431% ( 4) 00:17:56.202 19.721 - 19.816: 99.8505% ( 1) 00:17:56.202 19.816 - 19.911: 99.8655% ( 2) 00:17:56.202 20.006 - 20.101: 99.8730% ( 1) 00:17:56.202 24.273 - 24.462: 99.8804% ( 1) 00:17:56.202 29.393 - 29.582: 99.8879% ( 1) 00:17:56.202 3980.705 - 4004.978: 99.9701% ( 11) 00:17:56.202 4004.978 - 4029.250: 100.0000% ( 4) 00:17:56.202 00:17:56.202 Complete histogram 00:17:56.202 ================== 00:17:56.202 Range in us Cumulative Count 00:17:56.202 2.050 - 2.062: 0.0075% ( 1) 00:17:56.202 2.062 - 2.074: 6.2547% ( 836) 00:17:56.202 2.074 - 2.086: 35.5328% ( 3918) 00:17:56.202 2.086 - 2.098: 38.6938% ( 423) 00:17:56.202 2.098 - 2.110: 49.0285% ( 1383) 00:17:56.202 2.110 - 2.121: 63.1221% ( 1886) 00:17:56.202 2.121 - 2.133: 65.0052% ( 252) 00:17:56.202 2.133 - 2.145: 71.3720% ( 852) 00:17:56.202 2.145 - 2.157: 77.9555% ( 881) 00:17:56.202 2.157 - 2.169: 78.7849% ( 111) 00:17:56.202 2.169 - 2.181: 84.3222% ( 741) 00:17:56.202 2.181 - 2.193: 88.3575% ( 540) 00:17:56.202 2.193 - 2.204: 88.9329% ( 77) 00:17:56.202 2.204 - 2.216: 89.7923% ( 115) 00:17:56.202 2.216 - 2.228: 91.9668% ( 291) 00:17:56.202 2.228 - 2.240: 93.5137% ( 207) 00:17:56.202 2.240 - 2.252: 94.3656% ( 114) 00:17:56.202 2.252 - 2.264: 95.1278% ( 102) 
00:17:56.202 2.264 - 2.276: 95.3221% ( 26) 00:17:56.202 2.276 - 2.287: 95.5388% ( 29) 00:17:56.202 2.287 - 2.299: 95.8153% ( 37) 00:17:56.202 2.299 - 2.311: 96.0693% ( 34) 00:17:56.202 2.311 - 2.323: 96.1889% ( 16) 00:17:56.202 2.323 - 2.335: 96.2412% ( 7) 00:17:56.202 2.335 - 2.347: 96.3757% ( 18) 00:17:56.202 2.347 - 2.359: 96.5700% ( 26) 00:17:56.202 2.359 - 2.370: 96.8839% ( 42) 00:17:56.202 2.370 - 2.382: 97.1977% ( 42) 00:17:56.202 2.382 - 2.394: 97.6237% ( 57) 00:17:56.202 2.394 - 2.406: 97.8404% ( 29) 00:17:56.202 2.406 - 2.418: 98.0347% ( 26) 00:17:56.202 2.418 - 2.430: 98.1916% ( 21) 00:17:56.202 2.430 - 2.441: 98.3112% ( 16) 00:17:56.202 2.441 - 2.453: 98.3934% ( 11) 00:17:56.202 2.453 - 2.465: 98.4756% ( 11) 00:17:56.202 2.465 - 2.477: 98.5279% ( 7) 00:17:56.202 2.477 - 2.489: 98.5503% ( 3) 00:17:56.202 2.489 - 2.501: 98.5578% ( 1) 00:17:56.202 2.501 - 2.513: 98.5877% ( 4) 00:17:56.202 2.513 - 2.524: 98.6026% ( 2) 00:17:56.202 2.524 - 2.536: 98.6101% ( 1) 00:17:56.202 2.536 - 2.548: 98.6175% ( 1) 00:17:56.202 2.572 - 2.584: 98.6250% ( 1) 00:17:56.202 2.714 - 2.726: 98.6400% ( 2) 00:17:56.202 2.750 - 2.761: 98.6474% ( 1) 00:17:56.202 2.833 - 2.844: 98.6549% ( 1) 00:17:56.202 3.105 - 3.129: 98.6624% ( 1) 00:17:56.202 3.200 - 3.224: 98.6699% ( 1) 00:17:56.202 3.224 - 3.247: 98.6773% ( 1) 00:17:56.202 3.247 - 3.271: 98.6923% ( 2) 00:17:56.202 3.271 - 3.295: 98.7147% ( 3) 00:17:56.202 3.295 - 3.319: 98.7222% ( 1) 00:17:56.202 3.319 - 3.342: 98.7371% ( 2) 00:17:56.202 3.342 - 3.366: 98.7446% ( 1) 00:17:56.202 3.366 - 3.390: 98.7521% ( 1) 00:17:56.202 3.484 - 3.508: 98.7595% ( 1) 00:17:56.202 3.532 - 3.556: 98.7670% ( 1) 00:17:56.202 3.556 - 3.579: 98.7819% ( 2) 00:17:56.202 3.627 - 3.650: 98.7894% ( 1) 00:17:56.202 3.650 - 3.674: 98.7969% ( 1) 00:17:56.202 3.816 - 3.840: 98.8118% ( 2) 00:17:56.202 3.982 - 4.006: 98.8193% ( 1) 00:17:56.202 5.191 - 5.215: 98.8268% ( 1) 00:17:56.202 5.476 - 5.499: 98.8343% ( 1) 00:17:56.202 5.570 - 5.594: 98.8417% ( 1) 
00:17:56.202 5.618 - 5.641: 98.8492% ( 1) 00:17:56.202 5.665 - 5.689: 98.8567% ( 1) 00:17:56.202 [2024-07-26 01:00:26.604622] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:56.460 5.689 - 5.713: 98.8641% ( 1) 00:17:56.460 5.713 - 5.736: 98.8716% ( 1) 00:17:56.460 5.736 - 5.760: 98.8791% ( 1) 00:17:56.460 5.950 - 5.973: 98.8866% ( 1) 00:17:56.460 5.973 - 5.997: 98.9015% ( 2) 00:17:56.460 6.021 - 6.044: 98.9090% ( 1) 00:17:56.460 6.044 - 6.068: 98.9165% ( 1) 00:17:56.460 6.210 - 6.258: 98.9239% ( 1) 00:17:56.460 6.447 - 6.495: 98.9314% ( 1) 00:17:56.460 6.495 - 6.542: 98.9389% ( 1) 00:17:56.460 6.827 - 6.874: 98.9538% ( 2) 00:17:56.460 6.921 - 6.969: 98.9613% ( 1) 00:17:56.460 6.969 - 7.016: 98.9688% ( 1) 00:17:56.460 7.016 - 7.064: 98.9762% ( 1) 00:17:56.460 7.159 - 7.206: 98.9837% ( 1) 00:17:56.460 7.206 - 7.253: 98.9912% ( 1) 00:17:56.460 7.301 - 7.348: 98.9987% ( 1) 00:17:56.460 7.396 - 7.443: 99.0061% ( 1) 00:17:56.460 8.059 - 8.107: 99.0136% ( 1) 00:17:56.460 8.818 - 8.865: 99.0211% ( 1) 00:17:56.460 15.644 - 15.739: 99.0435% ( 3) 00:17:56.460 15.834 - 15.929: 99.0734% ( 4) 00:17:56.460 15.929 - 16.024: 99.1033% ( 4) 00:17:56.460 16.024 - 16.119: 99.1182% ( 2) 00:17:56.460 16.119 - 16.213: 99.1257% ( 1) 00:17:56.460 16.213 - 16.308: 99.1406% ( 2) 00:17:56.460 16.308 - 16.403: 99.1481% ( 1) 00:17:56.460 16.403 - 16.498: 99.1855% ( 5) 00:17:56.460 16.498 - 16.593: 99.2453% ( 8) 00:17:56.460 16.593 - 16.687: 99.2527% ( 1) 00:17:56.460 16.687 - 16.782: 99.2901% ( 5) 00:17:56.460 16.782 - 16.877: 99.2976% ( 1) 00:17:56.460 16.877 - 16.972: 99.3125% ( 2) 00:17:56.460 17.067 - 17.161: 99.3275% ( 2) 00:17:56.461 17.161 - 17.256: 99.3499% ( 3) 00:17:56.461 17.541 - 17.636: 99.3573% ( 1) 00:17:56.461 17.825 - 17.920: 99.3723% ( 2) 00:17:56.461 18.015 - 18.110: 99.3798% ( 1) 00:17:56.461 18.679 - 18.773: 99.3872% ( 1) 00:17:56.461 3980.705 - 4004.978: 99.8356% ( 60) 00:17:56.461 4004.978 - 4029.250: 100.0000% ( 
22) 00:17:56.461 00:17:56.461 01:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:17:56.461 01:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:56.461 01:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:17:56.461 01:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:17:56.461 01:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:56.719 [ 00:17:56.719 { 00:17:56.719 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:56.719 "subtype": "Discovery", 00:17:56.719 "listen_addresses": [], 00:17:56.719 "allow_any_host": true, 00:17:56.719 "hosts": [] 00:17:56.719 }, 00:17:56.719 { 00:17:56.719 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:56.719 "subtype": "NVMe", 00:17:56.719 "listen_addresses": [ 00:17:56.719 { 00:17:56.719 "trtype": "VFIOUSER", 00:17:56.719 "adrfam": "IPv4", 00:17:56.719 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:56.719 "trsvcid": "0" 00:17:56.719 } 00:17:56.719 ], 00:17:56.719 "allow_any_host": true, 00:17:56.719 "hosts": [], 00:17:56.719 "serial_number": "SPDK1", 00:17:56.719 "model_number": "SPDK bdev Controller", 00:17:56.719 "max_namespaces": 32, 00:17:56.719 "min_cntlid": 1, 00:17:56.719 "max_cntlid": 65519, 00:17:56.719 "namespaces": [ 00:17:56.719 { 00:17:56.719 "nsid": 1, 00:17:56.719 "bdev_name": "Malloc1", 00:17:56.719 "name": "Malloc1", 00:17:56.719 "nguid": "A6F6E8586CF84E70A935AA9232032C30", 00:17:56.719 "uuid": "a6f6e858-6cf8-4e70-a935-aa9232032c30" 00:17:56.719 } 00:17:56.719 ] 00:17:56.719 }, 00:17:56.719 { 00:17:56.719 "nqn": "nqn.2019-07.io.spdk:cnode2", 
00:17:56.719 "subtype": "NVMe", 00:17:56.719 "listen_addresses": [ 00:17:56.719 { 00:17:56.719 "trtype": "VFIOUSER", 00:17:56.719 "adrfam": "IPv4", 00:17:56.719 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:56.719 "trsvcid": "0" 00:17:56.719 } 00:17:56.719 ], 00:17:56.719 "allow_any_host": true, 00:17:56.719 "hosts": [], 00:17:56.719 "serial_number": "SPDK2", 00:17:56.719 "model_number": "SPDK bdev Controller", 00:17:56.719 "max_namespaces": 32, 00:17:56.719 "min_cntlid": 1, 00:17:56.719 "max_cntlid": 65519, 00:17:56.719 "namespaces": [ 00:17:56.719 { 00:17:56.719 "nsid": 1, 00:17:56.719 "bdev_name": "Malloc2", 00:17:56.719 "name": "Malloc2", 00:17:56.719 "nguid": "19926401784D4BB28F8C35CACEE4BAF3", 00:17:56.719 "uuid": "19926401-784d-4bb2-8f8c-35cacee4baf3" 00:17:56.719 } 00:17:56.719 ] 00:17:56.719 } 00:17:56.719 ] 00:17:56.719 01:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:56.719 01:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1825191 00:17:56.719 01:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:17:56.719 01:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:56.719 01:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:17:56.719 01:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:56.719 01:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:56.719 01:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:17:56.719 01:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:56.719 01:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:17:56.719 EAL: No free 2048 kB hugepages reported on node 1 00:17:56.719 [2024-07-26 01:00:27.071558] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:56.977 Malloc3 00:17:56.977 01:00:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:17:57.235 [2024-07-26 01:00:27.432180] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:57.235 01:00:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:57.235 Asynchronous Event Request test 00:17:57.235 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:57.235 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:57.235 Registering asynchronous event callbacks... 00:17:57.235 Starting namespace attribute notice tests for all controllers... 00:17:57.235 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:57.235 aer_cb - Changed Namespace 00:17:57.235 Cleaning up... 
00:17:57.540 [ 00:17:57.540 { 00:17:57.540 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:57.540 "subtype": "Discovery", 00:17:57.540 "listen_addresses": [], 00:17:57.540 "allow_any_host": true, 00:17:57.540 "hosts": [] 00:17:57.540 }, 00:17:57.540 { 00:17:57.540 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:57.540 "subtype": "NVMe", 00:17:57.540 "listen_addresses": [ 00:17:57.540 { 00:17:57.540 "trtype": "VFIOUSER", 00:17:57.540 "adrfam": "IPv4", 00:17:57.540 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:57.540 "trsvcid": "0" 00:17:57.540 } 00:17:57.540 ], 00:17:57.540 "allow_any_host": true, 00:17:57.540 "hosts": [], 00:17:57.540 "serial_number": "SPDK1", 00:17:57.540 "model_number": "SPDK bdev Controller", 00:17:57.540 "max_namespaces": 32, 00:17:57.540 "min_cntlid": 1, 00:17:57.540 "max_cntlid": 65519, 00:17:57.540 "namespaces": [ 00:17:57.540 { 00:17:57.540 "nsid": 1, 00:17:57.540 "bdev_name": "Malloc1", 00:17:57.540 "name": "Malloc1", 00:17:57.540 "nguid": "A6F6E8586CF84E70A935AA9232032C30", 00:17:57.540 "uuid": "a6f6e858-6cf8-4e70-a935-aa9232032c30" 00:17:57.540 }, 00:17:57.540 { 00:17:57.540 "nsid": 2, 00:17:57.540 "bdev_name": "Malloc3", 00:17:57.540 "name": "Malloc3", 00:17:57.540 "nguid": "994F8C7FD27C428DB6DE1F05EF2FD397", 00:17:57.540 "uuid": "994f8c7f-d27c-428d-b6de-1f05ef2fd397" 00:17:57.540 } 00:17:57.540 ] 00:17:57.540 }, 00:17:57.540 { 00:17:57.540 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:57.540 "subtype": "NVMe", 00:17:57.540 "listen_addresses": [ 00:17:57.540 { 00:17:57.540 "trtype": "VFIOUSER", 00:17:57.540 "adrfam": "IPv4", 00:17:57.541 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:57.541 "trsvcid": "0" 00:17:57.541 } 00:17:57.541 ], 00:17:57.541 "allow_any_host": true, 00:17:57.541 "hosts": [], 00:17:57.541 "serial_number": "SPDK2", 00:17:57.541 "model_number": "SPDK bdev Controller", 00:17:57.541 "max_namespaces": 32, 00:17:57.541 "min_cntlid": 1, 00:17:57.541 "max_cntlid": 65519, 00:17:57.541 "namespaces": [ 
00:17:57.541 { 00:17:57.541 "nsid": 1, 00:17:57.541 "bdev_name": "Malloc2", 00:17:57.541 "name": "Malloc2", 00:17:57.541 "nguid": "19926401784D4BB28F8C35CACEE4BAF3", 00:17:57.541 "uuid": "19926401-784d-4bb2-8f8c-35cacee4baf3" 00:17:57.541 } 00:17:57.541 ] 00:17:57.541 } 00:17:57.541 ] 00:17:57.541 01:00:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1825191 00:17:57.541 01:00:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:57.541 01:00:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:57.541 01:00:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:17:57.541 01:00:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:57.541 [2024-07-26 01:00:27.715078] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:17:57.541 [2024-07-26 01:00:27.715125] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1825203 ] 00:17:57.541 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.541 [2024-07-26 01:00:27.748157] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:17:57.541 [2024-07-26 01:00:27.757365] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:57.541 [2024-07-26 01:00:27.757410] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f6749717000 00:17:57.541 [2024-07-26 01:00:27.758375] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:57.541 [2024-07-26 01:00:27.759378] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:57.541 [2024-07-26 01:00:27.760386] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:57.541 [2024-07-26 01:00:27.761378] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:57.541 [2024-07-26 01:00:27.762388] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:57.541 [2024-07-26 01:00:27.763398] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:57.541 [2024-07-26 01:00:27.764425] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, 
Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:57.541 [2024-07-26 01:00:27.765429] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:57.541 [2024-07-26 01:00:27.766453] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:57.541 [2024-07-26 01:00:27.766476] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f67484cb000 00:17:57.541 [2024-07-26 01:00:27.767595] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:57.541 [2024-07-26 01:00:27.786368] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:17:57.541 [2024-07-26 01:00:27.786403] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:17:57.541 [2024-07-26 01:00:27.788507] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:57.541 [2024-07-26 01:00:27.788556] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:57.541 [2024-07-26 01:00:27.788640] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:17:57.541 [2024-07-26 01:00:27.788664] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:17:57.541 [2024-07-26 01:00:27.788674] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:17:57.541 [2024-07-26 01:00:27.789513] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:17:57.541 [2024-07-26 01:00:27.789539] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:17:57.541 [2024-07-26 01:00:27.789553] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:17:57.541 [2024-07-26 01:00:27.790520] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:57.541 [2024-07-26 01:00:27.790541] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:17:57.541 [2024-07-26 01:00:27.790553] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:17:57.541 [2024-07-26 01:00:27.791524] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:17:57.541 [2024-07-26 01:00:27.791543] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:57.541 [2024-07-26 01:00:27.792533] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:17:57.541 [2024-07-26 01:00:27.792553] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:17:57.541 [2024-07-26 01:00:27.792562] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:17:57.541 [2024-07-26 01:00:27.792573] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:57.541 [2024-07-26 01:00:27.792683] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:17:57.541 [2024-07-26 01:00:27.792691] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:57.541 [2024-07-26 01:00:27.792699] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:17:57.541 [2024-07-26 01:00:27.793545] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:17:57.541 [2024-07-26 01:00:27.794550] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:17:57.541 [2024-07-26 01:00:27.795552] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:57.541 [2024-07-26 01:00:27.796549] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:57.541 [2024-07-26 01:00:27.796612] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:57.541 [2024-07-26 01:00:27.797570] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:17:57.541 [2024-07-26 01:00:27.797589] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:57.541 [2024-07-26 01:00:27.797602] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:17:57.541 [2024-07-26 01:00:27.797626] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:17:57.541 [2024-07-26 01:00:27.797642] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:17:57.541 [2024-07-26 01:00:27.797660] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:57.541 [2024-07-26 01:00:27.797669] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:57.541 [2024-07-26 01:00:27.797676] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:57.541 [2024-07-26 01:00:27.797693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:57.541 [2024-07-26 01:00:27.804072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:57.541 [2024-07-26 01:00:27.804095] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:17:57.541 [2024-07-26 01:00:27.804104] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:17:57.541 [2024-07-26 01:00:27.804112] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:17:57.541 [2024-07-26 01:00:27.804120] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:57.541 [2024-07-26 01:00:27.804127] 
nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:17:57.541 [2024-07-26 01:00:27.804135] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:17:57.541 [2024-07-26 01:00:27.804143] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:17:57.541 [2024-07-26 01:00:27.804156] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:17:57.541 [2024-07-26 01:00:27.804176] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:57.541 [2024-07-26 01:00:27.812068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:57.541 [2024-07-26 01:00:27.812096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.542 [2024-07-26 01:00:27.812127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.542 [2024-07-26 01:00:27.812140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.542 [2024-07-26 01:00:27.812151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.542 [2024-07-26 01:00:27.812161] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:17:57.542 [2024-07-26 01:00:27.812176] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:57.542 [2024-07-26 01:00:27.812191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:57.542 [2024-07-26 01:00:27.820072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:57.542 [2024-07-26 01:00:27.820095] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:17:57.542 [2024-07-26 01:00:27.820106] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:57.542 [2024-07-26 01:00:27.820122] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:17:57.542 [2024-07-26 01:00:27.820132] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:17:57.542 [2024-07-26 01:00:27.820147] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:57.542 [2024-07-26 01:00:27.828079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:57.542 [2024-07-26 01:00:27.828156] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:17:57.542 [2024-07-26 01:00:27.828173] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:17:57.542 [2024-07-26 01:00:27.828186] 
nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:57.542 [2024-07-26 01:00:27.828195] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:57.542 [2024-07-26 01:00:27.828202] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:57.542 [2024-07-26 01:00:27.828212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:57.542 [2024-07-26 01:00:27.836068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:57.542 [2024-07-26 01:00:27.836091] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:17:57.542 [2024-07-26 01:00:27.836107] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:17:57.542 [2024-07-26 01:00:27.836122] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:17:57.542 [2024-07-26 01:00:27.836136] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:57.542 [2024-07-26 01:00:27.836144] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:57.542 [2024-07-26 01:00:27.836151] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:57.542 [2024-07-26 01:00:27.836161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:57.542 [2024-07-26 01:00:27.844071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:000a p:1 m:0 dnr:0 00:17:57.542 [2024-07-26 01:00:27.844100] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:57.542 [2024-07-26 01:00:27.844117] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:57.542 [2024-07-26 01:00:27.844130] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:57.542 [2024-07-26 01:00:27.844139] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:57.542 [2024-07-26 01:00:27.844146] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:57.542 [2024-07-26 01:00:27.844159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:57.542 [2024-07-26 01:00:27.852070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:57.542 [2024-07-26 01:00:27.852092] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:57.542 [2024-07-26 01:00:27.852105] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:17:57.542 [2024-07-26 01:00:27.852124] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:17:57.542 [2024-07-26 01:00:27.852137] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:17:57.542 
[2024-07-26 01:00:27.852146] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:57.542 [2024-07-26 01:00:27.852155] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:17:57.542 [2024-07-26 01:00:27.852164] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:17:57.542 [2024-07-26 01:00:27.852172] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:17:57.542 [2024-07-26 01:00:27.852180] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:17:57.542 [2024-07-26 01:00:27.852205] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:57.542 [2024-07-26 01:00:27.860072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:57.542 [2024-07-26 01:00:27.860098] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:57.542 [2024-07-26 01:00:27.868084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:57.542 [2024-07-26 01:00:27.868109] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:57.542 [2024-07-26 01:00:27.876073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:57.542 [2024-07-26 01:00:27.876098] nvme_qpair.c: 213:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:57.542 [2024-07-26 01:00:27.884070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:57.542 [2024-07-26 01:00:27.884101] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:57.542 [2024-07-26 01:00:27.884113] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:57.542 [2024-07-26 01:00:27.884119] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:57.542 [2024-07-26 01:00:27.884125] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:57.542 [2024-07-26 01:00:27.884131] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:57.542 [2024-07-26 01:00:27.884141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:57.542 [2024-07-26 01:00:27.884152] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:57.542 [2024-07-26 01:00:27.884160] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:57.542 [2024-07-26 01:00:27.884170] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:57.542 [2024-07-26 01:00:27.884179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:57.542 [2024-07-26 01:00:27.884190] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:57.542 [2024-07-26 01:00:27.884198] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002fb000 00:17:57.542 [2024-07-26 01:00:27.884204] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:57.542 [2024-07-26 01:00:27.884212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:57.542 [2024-07-26 01:00:27.884224] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:57.542 [2024-07-26 01:00:27.884232] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:57.542 [2024-07-26 01:00:27.884238] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:57.542 [2024-07-26 01:00:27.884246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:57.542 [2024-07-26 01:00:27.892072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:57.542 [2024-07-26 01:00:27.892109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:57.542 [2024-07-26 01:00:27.892127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:57.542 [2024-07-26 01:00:27.892138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:57.542 ===================================================== 00:17:57.542 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:57.542 ===================================================== 00:17:57.542 Controller Capabilities/Features 00:17:57.542 ================================ 00:17:57.542 Vendor ID: 4e58 00:17:57.542 
Subsystem Vendor ID: 4e58 00:17:57.542 Serial Number: SPDK2 00:17:57.542 Model Number: SPDK bdev Controller 00:17:57.542 Firmware Version: 24.09 00:17:57.542 Recommended Arb Burst: 6 00:17:57.542 IEEE OUI Identifier: 8d 6b 50 00:17:57.542 Multi-path I/O 00:17:57.542 May have multiple subsystem ports: Yes 00:17:57.542 May have multiple controllers: Yes 00:17:57.542 Associated with SR-IOV VF: No 00:17:57.542 Max Data Transfer Size: 131072 00:17:57.542 Max Number of Namespaces: 32 00:17:57.542 Max Number of I/O Queues: 127 00:17:57.543 NVMe Specification Version (VS): 1.3 00:17:57.543 NVMe Specification Version (Identify): 1.3 00:17:57.543 Maximum Queue Entries: 256 00:17:57.543 Contiguous Queues Required: Yes 00:17:57.543 Arbitration Mechanisms Supported 00:17:57.543 Weighted Round Robin: Not Supported 00:17:57.543 Vendor Specific: Not Supported 00:17:57.543 Reset Timeout: 15000 ms 00:17:57.543 Doorbell Stride: 4 bytes 00:17:57.543 NVM Subsystem Reset: Not Supported 00:17:57.543 Command Sets Supported 00:17:57.543 NVM Command Set: Supported 00:17:57.543 Boot Partition: Not Supported 00:17:57.543 Memory Page Size Minimum: 4096 bytes 00:17:57.543 Memory Page Size Maximum: 4096 bytes 00:17:57.543 Persistent Memory Region: Not Supported 00:17:57.543 Optional Asynchronous Events Supported 00:17:57.543 Namespace Attribute Notices: Supported 00:17:57.543 Firmware Activation Notices: Not Supported 00:17:57.543 ANA Change Notices: Not Supported 00:17:57.543 PLE Aggregate Log Change Notices: Not Supported 00:17:57.543 LBA Status Info Alert Notices: Not Supported 00:17:57.543 EGE Aggregate Log Change Notices: Not Supported 00:17:57.543 Normal NVM Subsystem Shutdown event: Not Supported 00:17:57.543 Zone Descriptor Change Notices: Not Supported 00:17:57.543 Discovery Log Change Notices: Not Supported 00:17:57.543 Controller Attributes 00:17:57.543 128-bit Host Identifier: Supported 00:17:57.543 Non-Operational Permissive Mode: Not Supported 00:17:57.543 NVM Sets: Not Supported 
00:17:57.543 Read Recovery Levels: Not Supported 00:17:57.543 Endurance Groups: Not Supported 00:17:57.543 Predictable Latency Mode: Not Supported 00:17:57.543 Traffic Based Keep ALive: Not Supported 00:17:57.543 Namespace Granularity: Not Supported 00:17:57.543 SQ Associations: Not Supported 00:17:57.543 UUID List: Not Supported 00:17:57.543 Multi-Domain Subsystem: Not Supported 00:17:57.543 Fixed Capacity Management: Not Supported 00:17:57.543 Variable Capacity Management: Not Supported 00:17:57.543 Delete Endurance Group: Not Supported 00:17:57.543 Delete NVM Set: Not Supported 00:17:57.543 Extended LBA Formats Supported: Not Supported 00:17:57.543 Flexible Data Placement Supported: Not Supported 00:17:57.543 00:17:57.543 Controller Memory Buffer Support 00:17:57.543 ================================ 00:17:57.543 Supported: No 00:17:57.543 00:17:57.543 Persistent Memory Region Support 00:17:57.543 ================================ 00:17:57.543 Supported: No 00:17:57.543 00:17:57.543 Admin Command Set Attributes 00:17:57.543 ============================ 00:17:57.543 Security Send/Receive: Not Supported 00:17:57.543 Format NVM: Not Supported 00:17:57.543 Firmware Activate/Download: Not Supported 00:17:57.543 Namespace Management: Not Supported 00:17:57.543 Device Self-Test: Not Supported 00:17:57.543 Directives: Not Supported 00:17:57.543 NVMe-MI: Not Supported 00:17:57.543 Virtualization Management: Not Supported 00:17:57.543 Doorbell Buffer Config: Not Supported 00:17:57.543 Get LBA Status Capability: Not Supported 00:17:57.543 Command & Feature Lockdown Capability: Not Supported 00:17:57.543 Abort Command Limit: 4 00:17:57.543 Async Event Request Limit: 4 00:17:57.543 Number of Firmware Slots: N/A 00:17:57.543 Firmware Slot 1 Read-Only: N/A 00:17:57.543 Firmware Activation Without Reset: N/A 00:17:57.543 Multiple Update Detection Support: N/A 00:17:57.543 Firmware Update Granularity: No Information Provided 00:17:57.543 Per-Namespace SMART Log: No 00:17:57.543 
Asymmetric Namespace Access Log Page: Not Supported 00:17:57.543 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:17:57.543 Command Effects Log Page: Supported 00:17:57.543 Get Log Page Extended Data: Supported 00:17:57.543 Telemetry Log Pages: Not Supported 00:17:57.543 Persistent Event Log Pages: Not Supported 00:17:57.543 Supported Log Pages Log Page: May Support 00:17:57.543 Commands Supported & Effects Log Page: Not Supported 00:17:57.543 Feature Identifiers & Effects Log Page:May Support 00:17:57.543 NVMe-MI Commands & Effects Log Page: May Support 00:17:57.543 Data Area 4 for Telemetry Log: Not Supported 00:17:57.543 Error Log Page Entries Supported: 128 00:17:57.543 Keep Alive: Supported 00:17:57.543 Keep Alive Granularity: 10000 ms 00:17:57.543 00:17:57.543 NVM Command Set Attributes 00:17:57.543 ========================== 00:17:57.543 Submission Queue Entry Size 00:17:57.543 Max: 64 00:17:57.543 Min: 64 00:17:57.543 Completion Queue Entry Size 00:17:57.543 Max: 16 00:17:57.543 Min: 16 00:17:57.543 Number of Namespaces: 32 00:17:57.543 Compare Command: Supported 00:17:57.543 Write Uncorrectable Command: Not Supported 00:17:57.543 Dataset Management Command: Supported 00:17:57.543 Write Zeroes Command: Supported 00:17:57.543 Set Features Save Field: Not Supported 00:17:57.543 Reservations: Not Supported 00:17:57.543 Timestamp: Not Supported 00:17:57.543 Copy: Supported 00:17:57.543 Volatile Write Cache: Present 00:17:57.543 Atomic Write Unit (Normal): 1 00:17:57.543 Atomic Write Unit (PFail): 1 00:17:57.543 Atomic Compare & Write Unit: 1 00:17:57.543 Fused Compare & Write: Supported 00:17:57.543 Scatter-Gather List 00:17:57.543 SGL Command Set: Supported (Dword aligned) 00:17:57.543 SGL Keyed: Not Supported 00:17:57.543 SGL Bit Bucket Descriptor: Not Supported 00:17:57.543 SGL Metadata Pointer: Not Supported 00:17:57.543 Oversized SGL: Not Supported 00:17:57.543 SGL Metadata Address: Not Supported 00:17:57.543 SGL Offset: Not Supported 00:17:57.543 Transport 
SGL Data Block: Not Supported 00:17:57.543 Replay Protected Memory Block: Not Supported 00:17:57.543 00:17:57.543 Firmware Slot Information 00:17:57.543 ========================= 00:17:57.543 Active slot: 1 00:17:57.543 Slot 1 Firmware Revision: 24.09 00:17:57.543 00:17:57.543 00:17:57.543 Commands Supported and Effects 00:17:57.543 ============================== 00:17:57.543 Admin Commands 00:17:57.543 -------------- 00:17:57.543 Get Log Page (02h): Supported 00:17:57.543 Identify (06h): Supported 00:17:57.543 Abort (08h): Supported 00:17:57.543 Set Features (09h): Supported 00:17:57.543 Get Features (0Ah): Supported 00:17:57.543 Asynchronous Event Request (0Ch): Supported 00:17:57.543 Keep Alive (18h): Supported 00:17:57.543 I/O Commands 00:17:57.543 ------------ 00:17:57.543 Flush (00h): Supported LBA-Change 00:17:57.543 Write (01h): Supported LBA-Change 00:17:57.543 Read (02h): Supported 00:17:57.543 Compare (05h): Supported 00:17:57.543 Write Zeroes (08h): Supported LBA-Change 00:17:57.543 Dataset Management (09h): Supported LBA-Change 00:17:57.543 Copy (19h): Supported LBA-Change 00:17:57.543 00:17:57.543 Error Log 00:17:57.543 ========= 00:17:57.543 00:17:57.543 Arbitration 00:17:57.543 =========== 00:17:57.543 Arbitration Burst: 1 00:17:57.543 00:17:57.543 Power Management 00:17:57.543 ================ 00:17:57.543 Number of Power States: 1 00:17:57.543 Current Power State: Power State #0 00:17:57.543 Power State #0: 00:17:57.543 Max Power: 0.00 W 00:17:57.543 Non-Operational State: Operational 00:17:57.543 Entry Latency: Not Reported 00:17:57.543 Exit Latency: Not Reported 00:17:57.543 Relative Read Throughput: 0 00:17:57.543 Relative Read Latency: 0 00:17:57.543 Relative Write Throughput: 0 00:17:57.543 Relative Write Latency: 0 00:17:57.543 Idle Power: Not Reported 00:17:57.543 Active Power: Not Reported 00:17:57.543 Non-Operational Permissive Mode: Not Supported 00:17:57.543 00:17:57.543 Health Information 00:17:57.543 ================== 00:17:57.543 
Critical Warnings: 00:17:57.543 Available Spare Space: OK 00:17:57.543 Temperature: OK 00:17:57.543 Device Reliability: OK 00:17:57.543 Read Only: No 00:17:57.543 Volatile Memory Backup: OK 00:17:57.543 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:57.543 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:57.543 Available Spare: 0% 00:17:57.543 Available Spare Threshold: 0% 00:17:57.544 Life Percentage Used: 0% 00:17:57.544 Data Units Read: 0 00:17:57.544 Data Units Written: 0 00:17:57.544 Host Read Commands: 0 00:17:57.544 Host Write Commands: 0 00:17:57.544 Controller Busy Time: 0 minutes 00:17:57.544 Power Cycles: 0 00:17:57.544 Power On Hours: 0 hours 00:17:57.544 Unsafe Shutdowns: 0 00:17:57.544 Unrecoverable Media Errors: 0 00:17:57.544 Lifetime Error Log Entries: 0 00:17:57.544 Warning Temperature Time: 0 minutes 00:17:57.544 Critical Temperature Time: 0 minutes 00:17:57.544 00:17:57.544 Number of Queues 00:17:57.544 ================ 00:17:57.544 Number of I/O Submission Queues: 127 00:17:57.544 Number of I/O Completion Queues: 127 00:17:57.544 00:17:57.544 Active Namespaces 00:17:57.544 ================= 00:17:57.544 Namespace ID:1 00:17:57.544 Error Recovery Timeout: Unlimited 00:17:57.544 Command Set Identifier: NVM (00h) 00:17:57.544 Deallocate: Supported 00:17:57.544 Deallocated/Unwritten Error: Not Supported 00:17:57.544 Deallocated Read Value: Unknown 00:17:57.544 Deallocate in Write Zeroes: Not Supported 00:17:57.544 Deallocated Guard Field: 0xFFFF 00:17:57.544 Flush: Supported 00:17:57.544 Reservation: Supported 00:17:57.544 Namespace Sharing Capabilities: Multiple Controllers 00:17:57.544 Size (in LBAs): 131072 (0GiB) 00:17:57.544 Capacity (in LBAs): 131072 (0GiB) 00:17:57.544 Utilization (in LBAs): 131072 (0GiB) 00:17:57.544 NGUID: 19926401784D4BB28F8C35CACEE4BAF3 00:17:57.544 UUID: 19926401-784d-4bb2-8f8c-35cacee4baf3 00:17:57.544 Thin Provisioning: Not Supported 00:17:57.544 Per-NS Atomic Units: Yes 00:17:57.544 Atomic Boundary Size (Normal): 0 00:17:57.544 Atomic Boundary Size (PFail): 0 00:17:57.544 Atomic Boundary Offset: 0 00:17:57.544 Maximum Single Source Range Length: 65535 00:17:57.544 Maximum Copy Length: 65535 00:17:57.544 Maximum Source Range Count: 1 00:17:57.544 NGUID/EUI64 Never Reused: No 00:17:57.544 Namespace Write Protected: No 00:17:57.544 Number of LBA Formats: 1 00:17:57.544 Current LBA Format: LBA Format #00 00:17:57.544 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:57.544 00:17:57.544
[2024-07-26 01:00:27.892258] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:57.543 [2024-07-26 01:00:27.900073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:57.543 [2024-07-26 01:00:27.900124] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:17:57.543 [2024-07-26 01:00:27.900141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.543 [2024-07-26 01:00:27.900152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.543 [2024-07-26 01:00:27.900162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.544 [2024-07-26 01:00:27.900172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.544 [2024-07-26 01:00:27.900236] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:57.544 [2024-07-26 01:00:27.900256] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:17:57.544 [2024-07-26 01:00:27.901240] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:57.544 [2024-07-26 01:00:27.901309] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:17:57.544 [2024-07-26 01:00:27.901324] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:17:57.544 [2024-07-26 01:00:27.902246] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:17:57.544 [2024-07-26 01:00:27.902275] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:17:57.544 [2024-07-26 01:00:27.902327] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:17:57.544 [2024-07-26 01:00:27.903549] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:57.544
01:00:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:57.802 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.802 [2024-07-26 01:00:28.133881] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:03.106 Initializing NVMe Controllers 00:18:03.106 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:03.106 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:03.106 
Initialization complete. Launching workers. 00:18:03.106 ======================================================== 00:18:03.106 Latency(us) 00:18:03.106 Device Information : IOPS MiB/s Average min max 00:18:03.106 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34584.26 135.09 3700.59 1154.85 8630.91 00:18:03.106 ======================================================== 00:18:03.106 Total : 34584.26 135.09 3700.59 1154.85 8630.91 00:18:03.106 00:18:03.106 [2024-07-26 01:00:33.239452] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:03.106 01:00:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:03.106 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.106 [2024-07-26 01:00:33.473076] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:08.380 Initializing NVMe Controllers 00:18:08.380 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:08.380 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:08.380 Initialization complete. Launching workers. 
00:18:08.380 ======================================================== 00:18:08.380 Latency(us) 00:18:08.380 Device Information : IOPS MiB/s Average min max 00:18:08.380 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31899.72 124.61 4011.85 1192.77 7844.77 00:18:08.380 ======================================================== 00:18:08.380 Total : 31899.72 124.61 4011.85 1192.77 7844.77 00:18:08.380 00:18:08.380 [2024-07-26 01:00:38.494330] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:08.380 01:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:08.380 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.380 [2024-07-26 01:00:38.708256] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:13.654 [2024-07-26 01:00:43.852213] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:13.654 Initializing NVMe Controllers 00:18:13.654 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:13.654 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:13.654 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:13.654 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:13.654 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:13.654 Initialization complete. Launching workers. 
00:18:13.654 Starting thread on core 2 00:18:13.654 Starting thread on core 3 00:18:13.654 Starting thread on core 1 00:18:13.654 01:00:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:13.654 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.913 [2024-07-26 01:00:44.151562] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:17.201 [2024-07-26 01:00:47.209964] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:17.201 Initializing NVMe Controllers 00:18:17.201 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:17.201 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:17.201 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:17.201 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:17.201 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:17.201 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:17.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:17.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:17.201 Initialization complete. Launching workers. 
00:18:17.201 Starting thread on core 1 with urgent priority queue 00:18:17.201 Starting thread on core 2 with urgent priority queue 00:18:17.201 Starting thread on core 3 with urgent priority queue 00:18:17.201 Starting thread on core 0 with urgent priority queue 00:18:17.201 SPDK bdev Controller (SPDK2 ) core 0: 4634.67 IO/s 21.58 secs/100000 ios 00:18:17.201 SPDK bdev Controller (SPDK2 ) core 1: 6200.67 IO/s 16.13 secs/100000 ios 00:18:17.201 SPDK bdev Controller (SPDK2 ) core 2: 5580.00 IO/s 17.92 secs/100000 ios 00:18:17.201 SPDK bdev Controller (SPDK2 ) core 3: 6102.00 IO/s 16.39 secs/100000 ios 00:18:17.201 ======================================================== 00:18:17.201 00:18:17.201 01:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:17.201 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.201 [2024-07-26 01:00:47.501729] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:17.201 Initializing NVMe Controllers 00:18:17.201 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:17.201 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:17.201 Namespace ID: 1 size: 0GB 00:18:17.201 Initialization complete. 00:18:17.201 INFO: using host memory buffer for IO 00:18:17.201 Hello world! 
00:18:17.201 [2024-07-26 01:00:47.510790] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:17.201 01:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:17.201 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.461 [2024-07-26 01:00:47.802428] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:18.836 Initializing NVMe Controllers 00:18:18.836 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:18.836 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:18.836 Initialization complete. Launching workers. 00:18:18.836 submit (in ns) avg, min, max = 7806.0, 3516.7, 4015613.3 00:18:18.836 complete (in ns) avg, min, max = 26003.3, 2062.2, 4016155.6 00:18:18.837 00:18:18.837 Submit histogram 00:18:18.837 ================ 00:18:18.837 Range in us Cumulative Count 00:18:18.837 3.508 - 3.532: 0.3572% ( 48) 00:18:18.837 3.532 - 3.556: 1.6001% ( 167) 00:18:18.837 3.556 - 3.579: 3.7806% ( 293) 00:18:18.837 3.579 - 3.603: 9.6152% ( 784) 00:18:18.837 3.603 - 3.627: 17.4593% ( 1054) 00:18:18.837 3.627 - 3.650: 26.8289% ( 1259) 00:18:18.837 3.650 - 3.674: 34.0552% ( 971) 00:18:18.837 3.674 - 3.698: 40.7755% ( 903) 00:18:18.837 3.698 - 3.721: 47.3394% ( 882) 00:18:18.837 3.721 - 3.745: 53.5015% ( 828) 00:18:18.837 3.745 - 3.769: 58.3240% ( 648) 00:18:18.837 3.769 - 3.793: 62.1046% ( 508) 00:18:18.837 3.793 - 3.816: 65.0815% ( 400) 00:18:18.837 3.816 - 3.840: 68.4007% ( 446) 00:18:18.837 3.840 - 3.864: 72.9553% ( 612) 00:18:18.837 3.864 - 3.887: 76.9815% ( 541) 00:18:18.837 3.887 - 3.911: 80.6504% ( 493) 00:18:18.837 3.911 - 3.935: 83.9548% ( 444) 00:18:18.837 3.935 - 3.959: 86.5372% ( 347) 00:18:18.837 3.959 - 
3.982: 88.4945% ( 263) 00:18:18.837 3.982 - 4.006: 90.0796% ( 213) 00:18:18.837 4.006 - 4.030: 91.4043% ( 178) 00:18:18.837 4.030 - 4.053: 92.5579% ( 155) 00:18:18.837 4.053 - 4.077: 93.4584% ( 121) 00:18:18.837 4.077 - 4.101: 94.2993% ( 113) 00:18:18.837 4.101 - 4.124: 94.9468% ( 87) 00:18:18.837 4.124 - 4.148: 95.5719% ( 84) 00:18:18.837 4.148 - 4.172: 96.0482% ( 64) 00:18:18.837 4.172 - 4.196: 96.3608% ( 42) 00:18:18.837 4.196 - 4.219: 96.5989% ( 32) 00:18:18.837 4.219 - 4.243: 96.7701% ( 23) 00:18:18.837 4.243 - 4.267: 96.9115% ( 19) 00:18:18.837 4.267 - 4.290: 97.0752% ( 22) 00:18:18.837 4.290 - 4.314: 97.1720% ( 13) 00:18:18.837 4.314 - 4.338: 97.2762% ( 14) 00:18:18.837 4.338 - 4.361: 97.3432% ( 9) 00:18:18.837 4.361 - 4.385: 97.4176% ( 10) 00:18:18.837 4.385 - 4.409: 97.4325% ( 2) 00:18:18.837 4.409 - 4.433: 97.4622% ( 4) 00:18:18.837 4.433 - 4.456: 97.4771% ( 2) 00:18:18.837 4.456 - 4.480: 97.4920% ( 2) 00:18:18.837 4.480 - 4.504: 97.5143% ( 3) 00:18:18.837 4.504 - 4.527: 97.5218% ( 1) 00:18:18.837 4.551 - 4.575: 97.5367% ( 2) 00:18:18.837 4.575 - 4.599: 97.5441% ( 1) 00:18:18.837 4.599 - 4.622: 97.5515% ( 1) 00:18:18.837 4.622 - 4.646: 97.5590% ( 1) 00:18:18.837 4.646 - 4.670: 97.5739% ( 2) 00:18:18.837 4.670 - 4.693: 97.5962% ( 3) 00:18:18.837 4.693 - 4.717: 97.6111% ( 2) 00:18:18.837 4.717 - 4.741: 97.6557% ( 6) 00:18:18.837 4.741 - 4.764: 97.7227% ( 9) 00:18:18.837 4.764 - 4.788: 97.7525% ( 4) 00:18:18.837 4.788 - 4.812: 97.7971% ( 6) 00:18:18.837 4.812 - 4.836: 97.8492% ( 7) 00:18:18.837 4.836 - 4.859: 97.8790% ( 4) 00:18:18.837 4.859 - 4.883: 97.9088% ( 4) 00:18:18.837 4.883 - 4.907: 97.9609% ( 7) 00:18:18.837 4.907 - 4.930: 97.9683% ( 1) 00:18:18.837 4.930 - 4.954: 97.9981% ( 4) 00:18:18.837 4.954 - 4.978: 98.0278% ( 4) 00:18:18.837 4.978 - 5.001: 98.0725% ( 6) 00:18:18.837 5.001 - 5.025: 98.0874% ( 2) 00:18:18.837 5.025 - 5.049: 98.1023% ( 2) 00:18:18.837 5.049 - 5.073: 98.1246% ( 3) 00:18:18.837 5.073 - 5.096: 98.1320% ( 1) 00:18:18.837 5.096 - 
5.120: 98.1543% ( 3) 00:18:18.837 5.144 - 5.167: 98.1692% ( 2) 00:18:18.837 5.167 - 5.191: 98.2064% ( 5) 00:18:18.837 5.191 - 5.215: 98.2437% ( 5) 00:18:18.837 5.262 - 5.286: 98.2585% ( 2) 00:18:18.837 5.286 - 5.310: 98.2809% ( 3) 00:18:18.837 5.357 - 5.381: 98.2883% ( 1) 00:18:18.837 5.381 - 5.404: 98.2958% ( 1) 00:18:18.837 5.404 - 5.428: 98.3106% ( 2) 00:18:18.837 5.428 - 5.452: 98.3181% ( 1) 00:18:18.837 5.523 - 5.547: 98.3255% ( 1) 00:18:18.837 5.547 - 5.570: 98.3330% ( 1) 00:18:18.837 5.594 - 5.618: 98.3404% ( 1) 00:18:18.837 5.831 - 5.855: 98.3478% ( 1) 00:18:18.837 5.855 - 5.879: 98.3553% ( 1) 00:18:18.837 6.068 - 6.116: 98.3627% ( 1) 00:18:18.837 6.163 - 6.210: 98.3776% ( 2) 00:18:18.837 6.258 - 6.305: 98.3925% ( 2) 00:18:18.837 6.353 - 6.400: 98.4074% ( 2) 00:18:18.837 6.400 - 6.447: 98.4223% ( 2) 00:18:18.837 6.495 - 6.542: 98.4297% ( 1) 00:18:18.837 6.542 - 6.590: 98.4372% ( 1) 00:18:18.837 6.684 - 6.732: 98.4446% ( 1) 00:18:18.837 6.827 - 6.874: 98.4520% ( 1) 00:18:18.837 6.874 - 6.921: 98.4595% ( 1) 00:18:18.837 6.921 - 6.969: 98.4744% ( 2) 00:18:18.837 6.969 - 7.016: 98.4818% ( 1) 00:18:18.837 7.016 - 7.064: 98.4967% ( 2) 00:18:18.837 7.111 - 7.159: 98.5116% ( 2) 00:18:18.837 7.159 - 7.206: 98.5190% ( 1) 00:18:18.837 7.253 - 7.301: 98.5265% ( 1) 00:18:18.837 7.301 - 7.348: 98.5339% ( 1) 00:18:18.837 7.348 - 7.396: 98.5413% ( 1) 00:18:18.837 7.443 - 7.490: 98.5488% ( 1) 00:18:18.837 7.490 - 7.538: 98.5637% ( 2) 00:18:18.837 7.585 - 7.633: 98.5711% ( 1) 00:18:18.837 7.680 - 7.727: 98.5860% ( 2) 00:18:18.837 7.822 - 7.870: 98.5934% ( 1) 00:18:18.837 7.870 - 7.917: 98.6083% ( 2) 00:18:18.837 7.917 - 7.964: 98.6232% ( 2) 00:18:18.837 7.964 - 8.012: 98.6381% ( 2) 00:18:18.837 8.012 - 8.059: 98.6455% ( 1) 00:18:18.837 8.059 - 8.107: 98.6530% ( 1) 00:18:18.837 8.107 - 8.154: 98.6604% ( 1) 00:18:18.837 8.154 - 8.201: 98.6753% ( 2) 00:18:18.837 8.296 - 8.344: 98.6976% ( 3) 00:18:18.837 8.344 - 8.391: 98.7051% ( 1) 00:18:18.837 8.391 - 8.439: 98.7125% ( 1) 
00:18:18.837 8.581 - 8.628: 98.7200% ( 1) 00:18:18.837 8.865 - 8.913: 98.7348% ( 2) 00:18:18.837 8.913 - 8.960: 98.7423% ( 1) 00:18:18.837 8.960 - 9.007: 98.7497% ( 1) 00:18:18.837 9.007 - 9.055: 98.7572% ( 1) 00:18:18.837 9.387 - 9.434: 98.7646% ( 1) 00:18:18.837 9.481 - 9.529: 98.7720% ( 1) 00:18:18.837 9.529 - 9.576: 98.7869% ( 2) 00:18:18.837 9.624 - 9.671: 98.7944% ( 1) 00:18:18.837 9.671 - 9.719: 98.8018% ( 1) 00:18:18.837 9.719 - 9.766: 98.8093% ( 1) 00:18:18.837 9.766 - 9.813: 98.8167% ( 1) 00:18:18.837 9.908 - 9.956: 98.8316% ( 2) 00:18:18.837 9.956 - 10.003: 98.8390% ( 1) 00:18:18.837 10.050 - 10.098: 98.8465% ( 1) 00:18:18.837 10.145 - 10.193: 98.8614% ( 2) 00:18:18.837 10.382 - 10.430: 98.8688% ( 1) 00:18:18.837 10.430 - 10.477: 98.8762% ( 1) 00:18:18.837 10.809 - 10.856: 98.8911% ( 2) 00:18:18.837 10.856 - 10.904: 98.8986% ( 1) 00:18:18.837 11.425 - 11.473: 98.9060% ( 1) 00:18:18.837 11.473 - 11.520: 98.9134% ( 1) 00:18:18.837 11.520 - 11.567: 98.9283% ( 2) 00:18:18.837 11.662 - 11.710: 98.9358% ( 1) 00:18:18.837 11.710 - 11.757: 98.9432% ( 1) 00:18:18.837 11.852 - 11.899: 98.9507% ( 1) 00:18:18.837 11.899 - 11.947: 98.9581% ( 1) 00:18:18.837 12.089 - 12.136: 98.9655% ( 1) 00:18:18.837 12.326 - 12.421: 98.9804% ( 2) 00:18:18.837 12.516 - 12.610: 98.9953% ( 2) 00:18:18.837 13.179 - 13.274: 99.0028% ( 1) 00:18:18.837 13.369 - 13.464: 99.0102% ( 1) 00:18:18.837 13.559 - 13.653: 99.0176% ( 1) 00:18:18.837 13.653 - 13.748: 99.0251% ( 1) 00:18:18.837 13.748 - 13.843: 99.0325% ( 1) 00:18:18.837 13.938 - 14.033: 99.0400% ( 1) 00:18:18.837 14.033 - 14.127: 99.0474% ( 1) 00:18:18.837 14.222 - 14.317: 99.0623% ( 2) 00:18:18.837 14.317 - 14.412: 99.0697% ( 1) 00:18:18.837 14.791 - 14.886: 99.0772% ( 1) 00:18:18.837 15.265 - 15.360: 99.0846% ( 1) 00:18:18.837 16.972 - 17.067: 99.0921% ( 1) 00:18:18.837 17.161 - 17.256: 99.0995% ( 1) 00:18:18.837 17.351 - 17.446: 99.1144% ( 2) 00:18:18.837 17.446 - 17.541: 99.1218% ( 1) 00:18:18.837 17.541 - 17.636: 99.1367% ( 2) 
00:18:18.837 17.636 - 17.730: 99.1442% ( 1) 00:18:18.837 17.730 - 17.825: 99.1814% ( 5) 00:18:18.837 17.825 - 17.920: 99.2260% ( 6) 00:18:18.837 17.920 - 18.015: 99.2707% ( 6) 00:18:18.837 18.015 - 18.110: 99.3228% ( 7) 00:18:18.837 18.110 - 18.204: 99.3972% ( 10) 00:18:18.837 18.204 - 18.299: 99.4865% ( 12) 00:18:18.837 18.299 - 18.394: 99.5386% ( 7) 00:18:18.837 18.394 - 18.489: 99.5981% ( 8) 00:18:18.837 18.489 - 18.584: 99.6279% ( 4) 00:18:18.837 18.584 - 18.679: 99.6949% ( 9) 00:18:18.837 18.679 - 18.773: 99.7321% ( 5) 00:18:18.837 18.773 - 18.868: 99.7693% ( 5) 00:18:18.837 18.868 - 18.963: 99.7916% ( 3) 00:18:18.837 19.058 - 19.153: 99.8065% ( 2) 00:18:18.837 19.247 - 19.342: 99.8214% ( 2) 00:18:18.837 19.342 - 19.437: 99.8512% ( 4) 00:18:18.837 19.437 - 19.532: 99.8586% ( 1) 00:18:18.838 19.532 - 19.627: 99.8660% ( 1) 00:18:18.838 19.627 - 19.721: 99.8735% ( 1) 00:18:18.838 20.196 - 20.290: 99.8809% ( 1) 00:18:18.838 21.713 - 21.807: 99.8884% ( 1) 00:18:18.838 22.471 - 22.566: 99.8958% ( 1) 00:18:18.838 41.529 - 41.719: 99.9033% ( 1) 00:18:18.838 3980.705 - 4004.978: 99.9926% ( 12) 00:18:18.838 4004.978 - 4029.250: 100.0000% ( 1) 00:18:18.838 00:18:18.838 Complete histogram 00:18:18.838 ================== 00:18:18.838 Range in us Cumulative Count 00:18:18.838 2.062 - 2.074: 7.3454% ( 987) 00:18:18.838 2.074 - 2.086: 46.4315% ( 5252) 00:18:18.838 2.086 - 2.098: 50.6586% ( 568) 00:18:18.838 2.098 - 2.110: 55.4439% ( 643) 00:18:18.838 2.110 - 2.121: 61.9930% ( 880) 00:18:18.838 2.121 - 2.133: 63.3921% ( 188) 00:18:18.838 2.133 - 2.145: 69.5170% ( 823) 00:18:18.838 2.145 - 2.157: 76.8698% ( 988) 00:18:18.838 2.157 - 2.169: 77.5471% ( 91) 00:18:18.838 2.169 - 2.181: 80.4644% ( 392) 00:18:18.838 2.181 - 2.193: 83.0170% ( 343) 00:18:18.838 2.193 - 2.204: 83.4785% ( 62) 00:18:18.838 2.204 - 2.216: 85.6218% ( 288) 00:18:18.838 2.216 - 2.228: 89.5587% ( 529) 00:18:18.838 2.228 - 2.240: 91.6574% ( 282) 00:18:18.838 2.240 - 2.252: 92.9076% ( 168) 00:18:18.838 2.252 - 
2.264: 94.0165% ( 149) 00:18:18.838 2.264 - 2.276: 94.2249% ( 28) 00:18:18.838 2.276 - 2.287: 94.4184% ( 26) 00:18:18.838 2.287 - 2.299: 94.7682% ( 47) 00:18:18.838 2.299 - 2.311: 95.3710% ( 81) 00:18:18.838 2.311 - 2.323: 95.6091% ( 32) 00:18:18.838 2.323 - 2.335: 95.7357% ( 17) 00:18:18.838 2.335 - 2.347: 95.9366% ( 27) 00:18:18.838 2.347 - 2.359: 96.1301% ( 26) 00:18:18.838 2.359 - 2.370: 96.4799% ( 47) 00:18:18.838 2.370 - 2.382: 96.8073% ( 44) 00:18:18.838 2.382 - 2.394: 97.4622% ( 88) 00:18:18.838 2.394 - 2.406: 97.7674% ( 41) 00:18:18.838 2.406 - 2.418: 97.8641% ( 13) 00:18:18.838 2.418 - 2.430: 97.9757% ( 15) 00:18:18.838 2.430 - 2.441: 98.0725% ( 13) 00:18:18.838 2.441 - 2.453: 98.1618% ( 12) 00:18:18.838 2.453 - 2.465: 98.2213% ( 8) 00:18:18.838 2.465 - 2.477: 98.2958% ( 10) 00:18:18.838 2.477 - 2.489: 98.3627% ( 9) 00:18:18.838 2.489 - 2.501: 98.4074% ( 6) 00:18:18.838 2.501 - 2.513: 98.4297% ( 3) 00:18:18.838 2.513 - 2.524: 98.4446% ( 2) 00:18:18.838 2.536 - 2.548: 98.4595% ( 2) 00:18:18.838 2.548 - 2.560: 98.4744% ( 2) 00:18:18.838 2.572 - 2.584: 98.4892% ( 2) 00:18:18.838 2.596 - 2.607: 98.4967% ( 1) 00:18:18.838 2.619 - 2.631: 98.5116% ( 2) 00:18:18.838 2.631 - 2.643: 98.5265% ( 2) 00:18:18.838 2.643 - 2.655: 98.5339% ( 1) 00:18:18.838 2.714 - 2.726: 98.5413% ( 1) 00:18:18.838 2.726 - 2.738: 98.5562% ( 2) 00:18:18.838 2.761 - 2.773: 98.5711% ( 2) 00:18:18.838 2.809 - 2.821: 98.5860% ( 2) 00:18:18.838 3.247 - 3.271: 98.5934% ( 1) 00:18:18.838 3.271 - 3.295: 98.6009% ( 1) 00:18:18.838 3.319 - 3.342: 98.6158% ( 2) 00:18:18.838 3.342 - 3.366: 98.6232% ( 1) 00:18:18.838 3.366 - 3.390: 98.6306% ( 1) 00:18:18.838 3.390 - 3.413: 98.6381% ( 1) 00:18:18.838 3.413 - 3.437: 98.6604% ( 3) 00:18:18.838 3.437 - 3.461: 98.6679% ( 1) 00:18:18.838 3.461 - 3.484: 98.6753% ( 1) 00:18:18.838 3.508 - 3.532: 98.6827% ( 1) 00:18:18.838 3.532 - 3.556: 98.6902% ( 1) 00:18:18.838 3.579 - 3.603: 98.7051% ( 2) 00:18:18.838 3.627 - 3.650: 98.7125% ( 1) 00:18:18.838 3.650 - 3.674: 
98.7200% ( 1) 00:18:18.838 3.721 - 3.745: 98.7274% ( 1) 00:18:18.838 3.745 - 3.769: 98.7348% ( 1) 00:18:18.838 [2024-07-26 01:00:48.896818] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:18.838 3.816 - 3.840: 98.7423% ( 1) 00:18:18.838 3.840 - 3.864: 98.7572% ( 2) 00:18:18.838 4.053 - 4.077: 98.7646% ( 1) 00:18:18.838 4.622 - 4.646: 98.7720% ( 1) 00:18:18.838 4.717 - 4.741: 98.7795% ( 1) 00:18:18.838 5.262 - 5.286: 98.7869% ( 1) 00:18:18.838 5.404 - 5.428: 98.7944% ( 1) 00:18:18.838 5.452 - 5.476: 98.8018% ( 1) 00:18:18.838 5.689 - 5.713: 98.8093% ( 1) 00:18:18.838 5.902 - 5.926: 98.8167% ( 1) 00:18:18.838 5.997 - 6.021: 98.8241% ( 1) 00:18:18.838 6.116 - 6.163: 98.8316% ( 1) 00:18:18.838 6.163 - 6.210: 98.8390% ( 1) 00:18:18.838 6.258 - 6.305: 98.8465% ( 1) 00:18:18.838 6.353 - 6.400: 98.8614% ( 2) 00:18:18.838 6.495 - 6.542: 98.8688% ( 1) 00:18:18.838 6.637 - 6.684: 98.8762% ( 1) 00:18:18.838 6.969 - 7.016: 98.8837% ( 1) 00:18:18.838 7.870 - 7.917: 98.8911% ( 1) 00:18:18.838 8.059 - 8.107: 98.8986% ( 1) 00:18:18.838 8.628 - 8.676: 98.9060% ( 1) 00:18:18.838 8.865 - 8.913: 98.9134% ( 1) 00:18:18.838 9.719 - 9.766: 98.9209% ( 1) 00:18:18.838 15.550 - 15.644: 98.9283% ( 1) 00:18:18.838 15.644 - 15.739: 98.9432% ( 2) 00:18:18.838 15.739 - 15.834: 98.9655% ( 3) 00:18:18.838 15.834 - 15.929: 98.9730% ( 1) 00:18:18.838 15.929 - 16.024: 99.0102% ( 5) 00:18:18.838 16.024 - 16.119: 99.0251% ( 2) 00:18:18.838 16.119 - 16.213: 99.0400% ( 2) 00:18:18.838 16.213 - 16.308: 99.0548% ( 2) 00:18:18.838 16.308 - 16.403: 99.0697% ( 2) 00:18:18.838 16.403 - 16.498: 99.0921% ( 3) 00:18:18.838 16.498 - 16.593: 99.1367% ( 6) 00:18:18.838 16.593 - 16.687: 99.1665% ( 4) 00:18:18.838 16.687 - 16.782: 99.2037% ( 5) 00:18:18.838 16.782 - 16.877: 99.2111% ( 1) 00:18:18.838 16.877 - 16.972: 99.2483% ( 5) 00:18:18.838 16.972 - 17.067: 99.2632% ( 2) 00:18:18.838 17.067 - 17.161: 99.2856% ( 3) 00:18:18.838 17.161 - 17.256: 99.2930% ( 
1) 00:18:18.838 17.256 - 17.351: 99.3079% ( 2) 00:18:18.838 17.351 - 17.446: 99.3228% ( 2) 00:18:18.838 17.446 - 17.541: 99.3376% ( 2) 00:18:18.838 17.730 - 17.825: 99.3451% ( 1) 00:18:18.838 17.920 - 18.015: 99.3525% ( 1) 00:18:18.838 18.015 - 18.110: 99.3674% ( 2) 00:18:18.838 18.110 - 18.204: 99.3823% ( 2) 00:18:18.838 18.204 - 18.299: 99.3897% ( 1) 00:18:18.838 19.153 - 19.247: 99.3972% ( 1) 00:18:18.838 84.575 - 84.954: 99.4046% ( 1) 00:18:18.838 3519.526 - 3543.799: 99.4121% ( 1) 00:18:18.838 3980.705 - 4004.978: 99.8214% ( 55) 00:18:18.838 4004.978 - 4029.250: 100.0000% ( 24) 00:18:18.838 00:18:18.838 01:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:18.838 01:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:18.838 01:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:18.838 01:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:18.838 01:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:18.838 [ 00:18:18.838 { 00:18:18.838 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:18.838 "subtype": "Discovery", 00:18:18.838 "listen_addresses": [], 00:18:18.838 "allow_any_host": true, 00:18:18.838 "hosts": [] 00:18:18.838 }, 00:18:18.838 { 00:18:18.838 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:18.838 "subtype": "NVMe", 00:18:18.838 "listen_addresses": [ 00:18:18.838 { 00:18:18.838 "trtype": "VFIOUSER", 00:18:18.838 "adrfam": "IPv4", 00:18:18.838 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:18.838 "trsvcid": "0" 00:18:18.838 } 00:18:18.838 ], 00:18:18.838 "allow_any_host": 
true, 00:18:18.838 "hosts": [], 00:18:18.838 "serial_number": "SPDK1", 00:18:18.838 "model_number": "SPDK bdev Controller", 00:18:18.838 "max_namespaces": 32, 00:18:18.838 "min_cntlid": 1, 00:18:18.838 "max_cntlid": 65519, 00:18:18.838 "namespaces": [ 00:18:18.838 { 00:18:18.838 "nsid": 1, 00:18:18.838 "bdev_name": "Malloc1", 00:18:18.838 "name": "Malloc1", 00:18:18.838 "nguid": "A6F6E8586CF84E70A935AA9232032C30", 00:18:18.838 "uuid": "a6f6e858-6cf8-4e70-a935-aa9232032c30" 00:18:18.838 }, 00:18:18.838 { 00:18:18.838 "nsid": 2, 00:18:18.838 "bdev_name": "Malloc3", 00:18:18.838 "name": "Malloc3", 00:18:18.838 "nguid": "994F8C7FD27C428DB6DE1F05EF2FD397", 00:18:18.838 "uuid": "994f8c7f-d27c-428d-b6de-1f05ef2fd397" 00:18:18.838 } 00:18:18.838 ] 00:18:18.838 }, 00:18:18.838 { 00:18:18.838 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:18.838 "subtype": "NVMe", 00:18:18.838 "listen_addresses": [ 00:18:18.838 { 00:18:18.838 "trtype": "VFIOUSER", 00:18:18.839 "adrfam": "IPv4", 00:18:18.839 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:18.839 "trsvcid": "0" 00:18:18.839 } 00:18:18.839 ], 00:18:18.839 "allow_any_host": true, 00:18:18.839 "hosts": [], 00:18:18.839 "serial_number": "SPDK2", 00:18:18.839 "model_number": "SPDK bdev Controller", 00:18:18.839 "max_namespaces": 32, 00:18:18.839 "min_cntlid": 1, 00:18:18.839 "max_cntlid": 65519, 00:18:18.839 "namespaces": [ 00:18:18.839 { 00:18:18.839 "nsid": 1, 00:18:18.839 "bdev_name": "Malloc2", 00:18:18.839 "name": "Malloc2", 00:18:18.839 "nguid": "19926401784D4BB28F8C35CACEE4BAF3", 00:18:18.839 "uuid": "19926401-784d-4bb2-8f8c-35cacee4baf3" 00:18:18.839 } 00:18:18.839 ] 00:18:18.839 } 00:18:18.839 ] 00:18:18.839 01:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:18.839 01:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1827718 00:18:18.839 01:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:18.839 01:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:18.839 01:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:18:18.839 01:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:18.839 01:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:18.839 01:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:18:18.839 01:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:18.839 01:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:18.839 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.096 [2024-07-26 01:00:49.347589] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:19.096 Malloc4 00:18:19.096 01:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:19.353 [2024-07-26 01:00:49.709242] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:19.353 01:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:19.353 
Asynchronous Event Request test 00:18:19.353 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:19.353 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:19.353 Registering asynchronous event callbacks... 00:18:19.353 Starting namespace attribute notice tests for all controllers... 00:18:19.353 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:19.353 aer_cb - Changed Namespace 00:18:19.353 Cleaning up... 00:18:19.610 [ 00:18:19.610 { 00:18:19.610 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:19.610 "subtype": "Discovery", 00:18:19.610 "listen_addresses": [], 00:18:19.610 "allow_any_host": true, 00:18:19.610 "hosts": [] 00:18:19.610 }, 00:18:19.610 { 00:18:19.610 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:19.610 "subtype": "NVMe", 00:18:19.610 "listen_addresses": [ 00:18:19.610 { 00:18:19.610 "trtype": "VFIOUSER", 00:18:19.610 "adrfam": "IPv4", 00:18:19.610 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:19.610 "trsvcid": "0" 00:18:19.610 } 00:18:19.610 ], 00:18:19.610 "allow_any_host": true, 00:18:19.610 "hosts": [], 00:18:19.610 "serial_number": "SPDK1", 00:18:19.610 "model_number": "SPDK bdev Controller", 00:18:19.610 "max_namespaces": 32, 00:18:19.610 "min_cntlid": 1, 00:18:19.610 "max_cntlid": 65519, 00:18:19.610 "namespaces": [ 00:18:19.610 { 00:18:19.610 "nsid": 1, 00:18:19.610 "bdev_name": "Malloc1", 00:18:19.610 "name": "Malloc1", 00:18:19.610 "nguid": "A6F6E8586CF84E70A935AA9232032C30", 00:18:19.610 "uuid": "a6f6e858-6cf8-4e70-a935-aa9232032c30" 00:18:19.610 }, 00:18:19.610 { 00:18:19.610 "nsid": 2, 00:18:19.610 "bdev_name": "Malloc3", 00:18:19.610 "name": "Malloc3", 00:18:19.610 "nguid": "994F8C7FD27C428DB6DE1F05EF2FD397", 00:18:19.610 "uuid": "994f8c7f-d27c-428d-b6de-1f05ef2fd397" 00:18:19.610 } 00:18:19.610 ] 00:18:19.610 }, 00:18:19.610 { 00:18:19.610 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:19.610 "subtype": "NVMe", 00:18:19.610 "listen_addresses": [ 
00:18:19.610 { 00:18:19.610 "trtype": "VFIOUSER", 00:18:19.610 "adrfam": "IPv4", 00:18:19.610 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:19.610 "trsvcid": "0" 00:18:19.610 } 00:18:19.610 ], 00:18:19.610 "allow_any_host": true, 00:18:19.610 "hosts": [], 00:18:19.610 "serial_number": "SPDK2", 00:18:19.610 "model_number": "SPDK bdev Controller", 00:18:19.610 "max_namespaces": 32, 00:18:19.610 "min_cntlid": 1, 00:18:19.610 "max_cntlid": 65519, 00:18:19.610 "namespaces": [ 00:18:19.610 { 00:18:19.610 "nsid": 1, 00:18:19.610 "bdev_name": "Malloc2", 00:18:19.610 "name": "Malloc2", 00:18:19.610 "nguid": "19926401784D4BB28F8C35CACEE4BAF3", 00:18:19.610 "uuid": "19926401-784d-4bb2-8f8c-35cacee4baf3" 00:18:19.610 }, 00:18:19.610 { 00:18:19.610 "nsid": 2, 00:18:19.610 "bdev_name": "Malloc4", 00:18:19.610 "name": "Malloc4", 00:18:19.610 "nguid": "B2E859C4DAD345D4881DDA4438470B6C", 00:18:19.610 "uuid": "b2e859c4-dad3-45d4-881d-da4438470b6c" 00:18:19.610 } 00:18:19.610 ] 00:18:19.610 } 00:18:19.610 ] 00:18:19.610 01:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1827718 00:18:19.610 01:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:19.610 01:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1821631 00:18:19.610 01:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1821631 ']' 00:18:19.610 01:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1821631 00:18:19.610 01:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:18:19.610 01:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:19.610 01:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1821631 00:18:19.610 01:00:49 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:19.610 01:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:19.610 01:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1821631' 00:18:19.610 killing process with pid 1821631 00:18:19.610 01:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1821631 00:18:19.610 01:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1821631 00:18:20.174 01:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:20.174 01:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:20.174 01:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:20.174 01:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:20.174 01:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:20.174 01:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1827859 00:18:20.174 01:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:20.174 01:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1827859' 00:18:20.174 Process pid: 1827859 00:18:20.174 01:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:20.174 01:00:50 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1827859 00:18:20.174 01:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1827859 ']' 00:18:20.174 01:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.174 01:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:20.174 01:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.174 01:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:20.174 01:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:20.174 [2024-07-26 01:00:50.343180] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:20.174 [2024-07-26 01:00:50.344169] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:18:20.174 [2024-07-26 01:00:50.344224] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.174 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.174 [2024-07-26 01:00:50.406306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:20.174 [2024-07-26 01:00:50.501255] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:20.174 [2024-07-26 01:00:50.501312] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:20.174 [2024-07-26 01:00:50.501326] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:20.174 [2024-07-26 01:00:50.501337] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:20.174 [2024-07-26 01:00:50.501347] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:20.174 [2024-07-26 01:00:50.501439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.174 [2024-07-26 01:00:50.501491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:20.174 [2024-07-26 01:00:50.501609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:20.174 [2024-07-26 01:00:50.501612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.432 [2024-07-26 01:00:50.606493] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:20.432 [2024-07-26 01:00:50.606727] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:20.432 [2024-07-26 01:00:50.607009] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:18:20.432 [2024-07-26 01:00:50.607631] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:20.432 [2024-07-26 01:00:50.607890] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:18:20.432 01:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:20.432 01:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:18:20.432 01:00:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:21.368 01:00:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:21.628 01:00:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:21.628 01:00:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:21.628 01:00:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:21.628 01:00:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:21.628 01:00:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:21.887 Malloc1 00:18:21.887 01:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:22.146 01:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:22.405 01:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:18:22.662 01:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:22.662 01:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:22.662 01:00:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:22.920 Malloc2 00:18:22.920 01:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:23.176 01:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:23.432 01:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:23.689 01:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:23.689 01:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1827859 00:18:23.689 01:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1827859 ']' 00:18:23.689 01:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1827859 00:18:23.689 01:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:18:23.689 01:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:23.689 01:00:53 
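The per-device setup loop captured above (two malloc bdevs, two subsystems, one vfio-user listener each) boils down to the following RPC sequence. This is a sketch reconstructed from the commands visible in the log, not a verbatim excerpt: it assumes a running `nvmf_tgt`, uses `$SPDK` as a stand-in for the checked-out SPDK tree, and omits the extra `-M -I` flags that the log shows on the first `nvmf_create_transport` call.

```shell
# Reconstructed sketch of the captured setup (assumes nvmf_tgt is already running;
# $SPDK stands in for the SPDK repo root seen in the log paths)
$SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER
for i in 1 2; do
    # one vfio-user socket directory per emulated device
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    # 64 MiB malloc bdev with 512-byte blocks, as in the log
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done
```

Teardown in the log is the mirror image: kill the target process and `rm -rf /var/run/vfio-user`.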
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1827859 00:18:23.689 01:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:23.689 01:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:23.689 01:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1827859' 00:18:23.689 killing process with pid 1827859 00:18:23.689 01:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1827859 00:18:23.690 01:00:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1827859 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:23.949 00:18:23.949 real 0m52.435s 00:18:23.949 user 3m27.157s 00:18:23.949 sys 0m4.368s 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:23.949 ************************************ 00:18:23.949 END TEST nvmf_vfio_user 00:18:23.949 ************************************ 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:18:23.949 ************************************ 00:18:23.949 START TEST nvmf_vfio_user_nvme_compliance 00:18:23.949 ************************************ 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:23.949 * Looking for test storage... 00:18:23.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.949 01:00:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:23.949 01:00:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:23.949 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:23.950 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:23.950 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:23.950 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1828454 00:18:23.950 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:23.950 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1828454' 00:18:23.950 Process pid: 1828454 00:18:23.950 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:23.950 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1828454 00:18:23.950 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 1828454 ']' 00:18:23.950 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.950 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:23.950 01:00:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.950 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:23.950 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:24.210 [2024-07-26 01:00:54.393219] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:18:24.210 [2024-07-26 01:00:54.393296] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:24.210 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.210 [2024-07-26 01:00:54.450789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:24.210 [2024-07-26 01:00:54.534914] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:24.210 [2024-07-26 01:00:54.534963] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:24.210 [2024-07-26 01:00:54.534986] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:24.210 [2024-07-26 01:00:54.535004] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:24.210 [2024-07-26 01:00:54.535019] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:24.210 [2024-07-26 01:00:54.535145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.210 [2024-07-26 01:00:54.535180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:24.210 [2024-07-26 01:00:54.535185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.468 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:24.468 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:18:24.468 01:00:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:25.400 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:25.400 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:25.400 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:25.400 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.400 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:25.400 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.400 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:25.400 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:25.400 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.400 01:00:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:25.400 malloc0 00:18:25.400 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.401 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:25.401 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.401 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:25.401 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.401 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:25.401 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.401 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:25.401 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.401 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:25.401 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.401 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:25.401 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
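The compliance run drives the same shape of setup through `rpc_cmd` (the test harness's wrapper around `scripts/rpc.py`) with a single device. The sketch below is reconstructed from the `rpc_cmd` calls captured above and assumes a running `nvmf_tgt` started with `-i 0 -e 0xFFFF -m 0x7` as in the log.

```shell
# Reconstructed single-subsystem setup used by compliance.sh
rpc_cmd nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
rpc_cmd bdev_malloc_create 64 512 -b malloc0      # 64 MiB bdev, 512 B blocks
rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0
```

The `nvme_compliance` binary is then pointed at the socket via `-r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'`, as the next log lines show.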
== 0 ]] 00:18:25.401 01:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:25.401 EAL: No free 2048 kB hugepages reported on node 1 00:18:25.658 00:18:25.658 00:18:25.658 CUnit - A unit testing framework for C - Version 2.1-3 00:18:25.658 http://cunit.sourceforge.net/ 00:18:25.658 00:18:25.658 00:18:25.658 Suite: nvme_compliance 00:18:25.658 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-26 01:00:55.889623] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:25.658 [2024-07-26 01:00:55.891101] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:25.658 [2024-07-26 01:00:55.891126] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:25.659 [2024-07-26 01:00:55.891139] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:25.659 [2024-07-26 01:00:55.892638] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:25.659 passed 00:18:25.659 Test: admin_identify_ctrlr_verify_fused ...[2024-07-26 01:00:55.978264] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:25.659 [2024-07-26 01:00:55.984297] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:25.659 passed 00:18:25.659 Test: admin_identify_ns ...[2024-07-26 01:00:56.067696] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:25.916 [2024-07-26 01:00:56.131081] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:25.916 [2024-07-26 01:00:56.139075] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:25.916 [2024-07-26 
01:00:56.160212] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:25.916 passed 00:18:25.916 Test: admin_get_features_mandatory_features ...[2024-07-26 01:00:56.240872] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:25.916 [2024-07-26 01:00:56.245913] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:25.916 passed 00:18:25.916 Test: admin_get_features_optional_features ...[2024-07-26 01:00:56.331512] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:25.916 [2024-07-26 01:00:56.334529] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:26.173 passed 00:18:26.174 Test: admin_set_features_number_of_queues ...[2024-07-26 01:00:56.415750] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:26.174 [2024-07-26 01:00:56.524194] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:26.174 passed 00:18:26.431 Test: admin_get_log_page_mandatory_logs ...[2024-07-26 01:00:56.603836] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:26.431 [2024-07-26 01:00:56.606864] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:26.431 passed 00:18:26.431 Test: admin_get_log_page_with_lpo ...[2024-07-26 01:00:56.691028] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:26.431 [2024-07-26 01:00:56.761073] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:26.431 [2024-07-26 01:00:56.774155] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:26.431 passed 00:18:26.431 Test: fabric_property_get ...[2024-07-26 01:00:56.856726] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:26.431 [2024-07-26 01:00:56.857985] 
vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:26.689 [2024-07-26 01:00:56.859766] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:26.689 passed 00:18:26.689 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-26 01:00:56.943324] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:26.689 [2024-07-26 01:00:56.947646] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:26.689 [2024-07-26 01:00:56.949359] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:26.689 passed 00:18:26.689 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-26 01:00:57.031627] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:26.689 [2024-07-26 01:00:57.115070] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:26.947 [2024-07-26 01:00:57.131066] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:26.947 [2024-07-26 01:00:57.136184] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:26.947 passed 00:18:26.947 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-26 01:00:57.219840] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:26.947 [2024-07-26 01:00:57.221151] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:26.947 [2024-07-26 01:00:57.222870] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:26.947 passed 00:18:26.947 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-26 01:00:57.303041] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:27.204 [2024-07-26 01:00:57.381075] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be 
deleted first 00:18:27.204 [2024-07-26 01:00:57.405083] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:27.204 [2024-07-26 01:00:57.410186] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:27.204 passed 00:18:27.204 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-26 01:00:57.492377] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:27.204 [2024-07-26 01:00:57.493673] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:27.204 [2024-07-26 01:00:57.493714] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:27.205 [2024-07-26 01:00:57.495405] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:27.205 passed 00:18:27.205 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-26 01:00:57.579638] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:27.462 [2024-07-26 01:00:57.671084] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:27.462 [2024-07-26 01:00:57.679068] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:27.462 [2024-07-26 01:00:57.687088] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:27.462 [2024-07-26 01:00:57.694664] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:27.462 [2024-07-26 01:00:57.723176] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:27.462 passed 00:18:27.462 Test: admin_create_io_sq_verify_pc ...[2024-07-26 01:00:57.808475] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:27.462 [2024-07-26 01:00:57.825083] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:27.462 
[2024-07-26 01:00:57.842766] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:27.462 passed 00:18:27.720 Test: admin_create_io_qp_max_qps ...[2024-07-26 01:00:57.925340] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:28.652 [2024-07-26 01:00:59.034076] nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:18:29.218 [2024-07-26 01:00:59.409657] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:29.218 passed 00:18:29.218 Test: admin_create_io_sq_shared_cq ...[2024-07-26 01:00:59.492981] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:29.218 [2024-07-26 01:00:59.622069] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:29.476 [2024-07-26 01:00:59.659163] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:29.476 passed 00:18:29.476 00:18:29.476 Run Summary: Type Total Ran Passed Failed Inactive 00:18:29.476 suites 1 1 n/a 0 0 00:18:29.476 tests 18 18 18 0 0 00:18:29.476 asserts 360 360 360 0 n/a 00:18:29.476 00:18:29.476 Elapsed time = 1.562 seconds 00:18:29.476 01:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1828454 00:18:29.476 01:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 1828454 ']' 00:18:29.476 01:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 1828454 00:18:29.476 01:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:18:29.476 01:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:29.476 01:00:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1828454 00:18:29.476 01:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:29.476 01:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:29.476 01:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1828454' 00:18:29.476 killing process with pid 1828454 00:18:29.476 01:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 1828454 00:18:29.476 01:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 1828454 00:18:29.735 01:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:29.735 01:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:29.735 00:18:29.735 real 0m5.696s 00:18:29.735 user 0m16.068s 00:18:29.735 sys 0m0.564s 00:18:29.735 01:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:29.735 01:00:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:29.735 ************************************ 00:18:29.735 END TEST nvmf_vfio_user_nvme_compliance 00:18:29.735 ************************************ 00:18:29.735 01:00:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:29.735 01:00:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
00:18:29.735 01:00:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:29.735 01:00:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:29.735 ************************************ 00:18:29.735 START TEST nvmf_vfio_user_fuzz 00:18:29.735 ************************************ 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:29.735 * Looking for test storage... 00:18:29.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.735 01:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:29.735 01:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1829177 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1829177' 00:18:29.735 Process pid: 1829177 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1829177 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1829177 ']' 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:29.735 01:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:29.735 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:29.994 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:29.994 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:18:29.994 01:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:31.368 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:31.368 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.368 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:31.368 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.368 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:31.368 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:31.368 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.368 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:31.368 malloc0 00:18:31.368 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.368 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:31.368 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.368 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:31.368 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.368 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:31.368 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.368 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:31.368 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.368 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:31.368 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.368 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:31.368 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.368 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:18:31.368 01:01:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a
00:19:03.424 Fuzzing completed. Shutting down the fuzz application
00:19:03.424
00:19:03.424 Dumping successful admin opcodes:
00:19:03.424 8, 9, 10, 24,
00:19:03.424 Dumping successful io opcodes:
00:19:03.424 0,
00:19:03.424 NS: 0x200003a1ef00 I/O qp, Total commands completed: 601728, total successful commands: 2325, random_seed: 1391755840
00:19:03.424 NS: 0x200003a1ef00 admin qp, Total commands completed: 139315, total successful commands: 1130, random_seed: 892648000
00:19:03.424 01:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0
00:19:03.424 01:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:03.424 01:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:19:03.424 01:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:03.424 01:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1829177
00:19:03.424 01:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1829177 ']'
00:19:03.424 01:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 1829177
00:19:03.424 01:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname
00:19:03.424 01:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:03.424 01:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1829177
00:19:03.424 01:01:31
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:19:03.424 01:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:19:03.424 01:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1829177'
00:19:03.424 killing process with pid 1829177
00:19:03.424 01:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 1829177
00:19:03.424 01:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 1829177
00:19:03.424 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt
00:19:03.424 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT
00:19:03.424
00:19:03.424 real 0m32.244s
00:19:03.424 user 0m31.687s
00:19:03.424 sys 0m30.009s
00:19:03.424 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:03.424 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:19:03.424 ************************************
00:19:03.424 END TEST nvmf_vfio_user_fuzz
00:19:03.424 ************************************
00:19:03.424 01:01:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp
00:19:03.424 01:01:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:19:03.424 01:01:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:03.424 01:01:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:19:03.425 ************************************
00:19:03.425 START TEST nvmf_auth_target
00:19:03.425 ************************************
00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp
00:19:03.425 * Looking for test storage...
00:19:03.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s
00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.425 01:01:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # 
gather_supported_nvmf_pci_devs 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:03.425 01:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:03.993 01:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:03.993 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:03.993 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:03.993 01:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:03.993 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:03.994 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 
00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:03.994 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:03.994 01:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:03.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:03.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:19:03.994 00:19:03.994 --- 10.0.0.2 ping statistics --- 00:19:03.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.994 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:03.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:03.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:19:03.994 00:19:03.994 --- 10.0.0.1 ping statistics --- 00:19:03.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.994 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1834606 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1834606 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1834606 ']' 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:03.994 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.560 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1834626 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@726 -- # digest=null 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8906d66430c721c10fcd5b946fc6fa1b178507893cc267e2 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Bwo 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8906d66430c721c10fcd5b946fc6fa1b178507893cc267e2 0 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8906d66430c721c10fcd5b946fc6fa1b178507893cc267e2 0 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8906d66430c721c10fcd5b946fc6fa1b178507893cc267e2 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Bwo 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Bwo 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.Bwo 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e6eaf733500a1faf9c08147313ed84b43ad74e04318ba174a6e7f6d07fbdc85c 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.LhX 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e6eaf733500a1faf9c08147313ed84b43ad74e04318ba174a6e7f6d07fbdc85c 3 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e6eaf733500a1faf9c08147313ed84b43ad74e04318ba174a6e7f6d07fbdc85c 3 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e6eaf733500a1faf9c08147313ed84b43ad74e04318ba174a6e7f6d07fbdc85c 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # digest=3 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.LhX 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.LhX 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.LhX 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ee9b910d3d28701362de2da4c1a2451f 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.1KW 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ee9b910d3d28701362de2da4c1a2451f 1 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
ee9b910d3d28701362de2da4c1a2451f 1 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ee9b910d3d28701362de2da4c1a2451f 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.1KW 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.1KW 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.1KW 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=680eeb2bb6b0039c27985d733ee29ef305cbc277b9b5686f 00:19:04.561 01:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.FIL 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 680eeb2bb6b0039c27985d733ee29ef305cbc277b9b5686f 2 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 680eeb2bb6b0039c27985d733ee29ef305cbc277b9b5686f 2 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=680eeb2bb6b0039c27985d733ee29ef305cbc277b9b5686f 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.FIL 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.FIL 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.FIL 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A 
digests 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4bcd763a8d29dd6499e2f7898a95305f3d88ed105d6f42d4 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.vGi 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4bcd763a8d29dd6499e2f7898a95305f3d88ed105d6f42d4 2 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4bcd763a8d29dd6499e2f7898a95305f3d88ed105d6f42d4 2 00:19:04.561 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:04.562 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:04.562 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4bcd763a8d29dd6499e2f7898a95305f3d88ed105d6f42d4 00:19:04.562 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:04.562 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:04.821 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.vGi 00:19:04.821 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.vGi 00:19:04.821 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # 
keys[2]=/tmp/spdk.key-sha384.vGi 00:19:04.821 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:04.821 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:04.821 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:04.821 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:04.821 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:04.821 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:04.821 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:04.821 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=41d65ab2fd0ae93cab9064f2dafd1522 00:19:04.821 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:04.821 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.4g2 00:19:04.821 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 41d65ab2fd0ae93cab9064f2dafd1522 1 00:19:04.821 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 41d65ab2fd0ae93cab9064f2dafd1522 1 00:19:04.821 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:04.821 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:04.821 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=41d65ab2fd0ae93cab9064f2dafd1522 00:19:04.821 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 
00:19:04.821 01:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:04.821 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.4g2 00:19:04.821 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.4g2 00:19:04.821 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.4g2 00:19:04.821 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:04.821 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:04.821 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:04.821 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:04.821 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:04.821 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:04.821 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:04.821 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5d4960c0d389d79c8da989a65dabc72528d7a049b42b7f8d124900d9cb72335c 00:19:04.821 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:04.821 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.0zL 00:19:04.821 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5d4960c0d389d79c8da989a65dabc72528d7a049b42b7f8d124900d9cb72335c 3 00:19:04.821 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # 
format_key DHHC-1 5d4960c0d389d79c8da989a65dabc72528d7a049b42b7f8d124900d9cb72335c 3 00:19:04.821 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:04.821 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:04.821 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5d4960c0d389d79c8da989a65dabc72528d7a049b42b7f8d124900d9cb72335c 00:19:04.821 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:04.821 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:04.821 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.0zL 00:19:04.821 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.0zL 00:19:04.821 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.0zL 00:19:04.821 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:04.821 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1834606 00:19:04.821 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1834606 ']' 00:19:04.821 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.821 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:04.821 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:04.822 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:04.822 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.080 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:05.080 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:05.080 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1834626 /var/tmp/host.sock 00:19:05.080 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1834626 ']' 00:19:05.080 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:19:05.080 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:05.080 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:05.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:19:05.080 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:05.080 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.338 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:05.338 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:05.338 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:19:05.338 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.338 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.338 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.338 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:05.338 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Bwo 00:19:05.338 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.338 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.338 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.338 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Bwo 00:19:05.338 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Bwo 00:19:05.596 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha512.LhX ]] 00:19:05.596 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LhX 00:19:05.596 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.596 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.596 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.596 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LhX 00:19:05.596 01:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LhX 00:19:05.855 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:05.855 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.1KW 00:19:05.855 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.855 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.855 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.855 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.1KW 00:19:05.855 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.1KW 00:19:06.113 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha384.FIL ]] 00:19:06.113 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.FIL 00:19:06.113 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.113 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.113 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.113 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.FIL 00:19:06.113 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.FIL 00:19:06.371 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:06.371 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.vGi 00:19:06.371 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.371 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.371 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.371 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.vGi 00:19:06.371 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.vGi 00:19:06.629 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha256.4g2 ]] 00:19:06.629 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4g2 00:19:06.629 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.629 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.629 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.629 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4g2 00:19:06.629 01:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4g2 00:19:06.887 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:06.887 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.0zL 00:19:06.887 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.887 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.887 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.887 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.0zL 00:19:06.887 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.0zL 00:19:07.146 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
'' ]] 00:19:07.146 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:07.146 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:07.146 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.146 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:07.146 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:07.438 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:19:07.438 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.438 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:07.438 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:07.438 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:07.438 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.438 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.438 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.438 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:07.438 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.438 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.438 01:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.696 00:19:07.696 01:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.696 01:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.696 01:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.954 01:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.954 01:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.954 01:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.954 01:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.954 01:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.954 01:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:19:07.954 { 00:19:07.954 "cntlid": 1, 00:19:07.954 "qid": 0, 00:19:07.954 "state": "enabled", 00:19:07.954 "thread": "nvmf_tgt_poll_group_000", 00:19:07.954 "listen_address": { 00:19:07.954 "trtype": "TCP", 00:19:07.954 "adrfam": "IPv4", 00:19:07.954 "traddr": "10.0.0.2", 00:19:07.954 "trsvcid": "4420" 00:19:07.954 }, 00:19:07.954 "peer_address": { 00:19:07.954 "trtype": "TCP", 00:19:07.954 "adrfam": "IPv4", 00:19:07.954 "traddr": "10.0.0.1", 00:19:07.954 "trsvcid": "43950" 00:19:07.954 }, 00:19:07.954 "auth": { 00:19:07.954 "state": "completed", 00:19:07.954 "digest": "sha256", 00:19:07.954 "dhgroup": "null" 00:19:07.954 } 00:19:07.954 } 00:19:07.954 ]' 00:19:07.954 01:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.954 01:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:07.954 01:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.954 01:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:07.954 01:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.954 01:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.954 01:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.212 01:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.212 01:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNmQ2NjQzMGM3MjFjMTBmY2Q1Yjk0NmZjNmZhMWIxNzg1MDc4OTNjYzI2N2UygDroDA==: --dhchap-ctrl-secret DHHC-1:03:ZTZlYWY3MzM1MDBhMWZhZjljMDgxNDczMTNlZDg0YjQzYWQ3NGUwNDMxOGJhMTc0YTZlN2Y2ZDA3ZmJkYzg1Y7MKYmg=: 00:19:09.146 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.146 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:09.146 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.146 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.146 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.146 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.146 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:09.146 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:09.404 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:09.404 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.404 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:09.404 01:01:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:09.404 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:09.404 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.404 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.404 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.404 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.404 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.404 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.404 01:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.970 00:19:09.970 01:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.970 01:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.970 01:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.970 01:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.970 01:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.970 01:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.970 01:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.228 01:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.228 01:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.228 { 00:19:10.228 "cntlid": 3, 00:19:10.228 "qid": 0, 00:19:10.228 "state": "enabled", 00:19:10.228 "thread": "nvmf_tgt_poll_group_000", 00:19:10.228 "listen_address": { 00:19:10.228 "trtype": "TCP", 00:19:10.228 "adrfam": "IPv4", 00:19:10.228 "traddr": "10.0.0.2", 00:19:10.228 "trsvcid": "4420" 00:19:10.228 }, 00:19:10.228 "peer_address": { 00:19:10.228 "trtype": "TCP", 00:19:10.228 "adrfam": "IPv4", 00:19:10.228 "traddr": "10.0.0.1", 00:19:10.228 "trsvcid": "55468" 00:19:10.228 }, 00:19:10.228 "auth": { 00:19:10.228 "state": "completed", 00:19:10.228 "digest": "sha256", 00:19:10.228 "dhgroup": "null" 00:19:10.228 } 00:19:10.228 } 00:19:10.228 ]' 00:19:10.228 01:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.228 01:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.228 01:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.228 01:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:10.228 01:01:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.228 01:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.228 01:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.228 01:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.486 01:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWU5YjkxMGQzZDI4NzAxMzYyZGUyZGE0YzFhMjQ1MWabzSDy: --dhchap-ctrl-secret DHHC-1:02:NjgwZWViMmJiNmIwMDM5YzI3OTg1ZDczM2VlMjllZjMwNWNiYzI3N2I5YjU2ODZm6Vpb7Q==: 00:19:11.420 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.420 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:11.420 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.420 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.420 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.420 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.420 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:11.420 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:11.678 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:11.678 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.678 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:11.678 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:11.678 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:11.678 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.678 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.678 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.678 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.678 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.678 01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.678 
01:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.937 00:19:11.937 01:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.937 01:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.937 01:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.194 01:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.194 01:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.194 01:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.195 01:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.195 01:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.195 01:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.195 { 00:19:12.195 "cntlid": 5, 00:19:12.195 "qid": 0, 00:19:12.195 "state": "enabled", 00:19:12.195 "thread": "nvmf_tgt_poll_group_000", 00:19:12.195 "listen_address": { 00:19:12.195 "trtype": "TCP", 00:19:12.195 "adrfam": "IPv4", 00:19:12.195 "traddr": "10.0.0.2", 00:19:12.195 "trsvcid": "4420" 00:19:12.195 }, 00:19:12.195 "peer_address": { 00:19:12.195 "trtype": "TCP", 00:19:12.195 "adrfam": "IPv4", 00:19:12.195 "traddr": 
"10.0.0.1", 00:19:12.195 "trsvcid": "55502" 00:19:12.195 }, 00:19:12.195 "auth": { 00:19:12.195 "state": "completed", 00:19:12.195 "digest": "sha256", 00:19:12.195 "dhgroup": "null" 00:19:12.195 } 00:19:12.195 } 00:19:12.195 ]' 00:19:12.195 01:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.195 01:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:12.195 01:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.452 01:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:12.452 01:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.452 01:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.452 01:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.452 01:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.710 01:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGJjZDc2M2E4ZDI5ZGQ2NDk5ZTJmNzg5OGE5NTMwNWYzZDg4ZWQxMDVkNmY0MmQ0SvADzQ==: --dhchap-ctrl-secret DHHC-1:01:NDFkNjVhYjJmZDBhZTkzY2FiOTA2NGYyZGFmZDE1MjLxfCW9: 00:19:13.641 01:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.641 01:01:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:13.641 01:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.641 01:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.641 01:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.641 01:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.641 01:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:13.641 01:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:13.899 01:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:13.899 01:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.899 01:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:13.899 01:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:13.899 01:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:13.899 01:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.899 01:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:13.899 01:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.899 01:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.899 01:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.899 01:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:13.899 01:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:14.157 00:19:14.157 01:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.157 01:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.157 01:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.415 01:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.415 01:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.415 01:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.415 01:01:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.415 01:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.415 01:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.415 { 00:19:14.415 "cntlid": 7, 00:19:14.415 "qid": 0, 00:19:14.415 "state": "enabled", 00:19:14.415 "thread": "nvmf_tgt_poll_group_000", 00:19:14.415 "listen_address": { 00:19:14.415 "trtype": "TCP", 00:19:14.415 "adrfam": "IPv4", 00:19:14.415 "traddr": "10.0.0.2", 00:19:14.415 "trsvcid": "4420" 00:19:14.415 }, 00:19:14.415 "peer_address": { 00:19:14.415 "trtype": "TCP", 00:19:14.415 "adrfam": "IPv4", 00:19:14.415 "traddr": "10.0.0.1", 00:19:14.415 "trsvcid": "55516" 00:19:14.415 }, 00:19:14.415 "auth": { 00:19:14.415 "state": "completed", 00:19:14.415 "digest": "sha256", 00:19:14.415 "dhgroup": "null" 00:19:14.415 } 00:19:14.415 } 00:19:14.415 ]' 00:19:14.415 01:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.415 01:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:14.415 01:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.415 01:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:14.416 01:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.416 01:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.416 01:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.416 01:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.673 01:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NWQ0OTYwYzBkMzg5ZDc5YzhkYTk4OWE2NWRhYmM3MjUyOGQ3YTA0OWI0MmI3ZjhkMTI0OTAwZDljYjcyMzM1Y2lFBjk=: 00:19:15.606 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.606 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:15.606 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.606 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.606 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.606 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:15.606 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.606 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:15.606 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:15.864 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe2048 0 00:19:15.864 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.864 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:15.864 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:15.864 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:15.864 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.864 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.864 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.864 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.864 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.864 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.864 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.430 00:19:16.430 01:01:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.430 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.430 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.430 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.430 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.430 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.430 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.687 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.687 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.687 { 00:19:16.687 "cntlid": 9, 00:19:16.687 "qid": 0, 00:19:16.687 "state": "enabled", 00:19:16.687 "thread": "nvmf_tgt_poll_group_000", 00:19:16.687 "listen_address": { 00:19:16.687 "trtype": "TCP", 00:19:16.687 "adrfam": "IPv4", 00:19:16.687 "traddr": "10.0.0.2", 00:19:16.687 "trsvcid": "4420" 00:19:16.687 }, 00:19:16.687 "peer_address": { 00:19:16.687 "trtype": "TCP", 00:19:16.687 "adrfam": "IPv4", 00:19:16.687 "traddr": "10.0.0.1", 00:19:16.687 "trsvcid": "55542" 00:19:16.687 }, 00:19:16.687 "auth": { 00:19:16.687 "state": "completed", 00:19:16.687 "digest": "sha256", 00:19:16.687 "dhgroup": "ffdhe2048" 00:19:16.687 } 00:19:16.687 } 00:19:16.687 ]' 00:19:16.687 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.687 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:16.687 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.688 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:16.688 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.688 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.688 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.688 01:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.945 01:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNmQ2NjQzMGM3MjFjMTBmY2Q1Yjk0NmZjNmZhMWIxNzg1MDc4OTNjYzI2N2UygDroDA==: --dhchap-ctrl-secret DHHC-1:03:ZTZlYWY3MzM1MDBhMWZhZjljMDgxNDczMTNlZDg0YjQzYWQ3NGUwNDMxOGJhMTc0YTZlN2Y2ZDA3ZmJkYzg1Y7MKYmg=: 00:19:17.878 01:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.878 01:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:17.878 01:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.878 01:01:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.878 01:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.878 01:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.878 01:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:17.878 01:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:18.136 01:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:18.136 01:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.136 01:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:18.136 01:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:18.136 01:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:18.136 01:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.136 01:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.136 01:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.136 01:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.136 01:01:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.136 01:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.136 01:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.394 00:19:18.394 01:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.394 01:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.394 01:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.652 01:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.652 01:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.652 01:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.652 01:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.652 01:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.652 01:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.652 { 
00:19:18.652 "cntlid": 11, 00:19:18.652 "qid": 0, 00:19:18.652 "state": "enabled", 00:19:18.652 "thread": "nvmf_tgt_poll_group_000", 00:19:18.652 "listen_address": { 00:19:18.652 "trtype": "TCP", 00:19:18.652 "adrfam": "IPv4", 00:19:18.652 "traddr": "10.0.0.2", 00:19:18.652 "trsvcid": "4420" 00:19:18.652 }, 00:19:18.652 "peer_address": { 00:19:18.652 "trtype": "TCP", 00:19:18.652 "adrfam": "IPv4", 00:19:18.652 "traddr": "10.0.0.1", 00:19:18.652 "trsvcid": "55558" 00:19:18.652 }, 00:19:18.652 "auth": { 00:19:18.652 "state": "completed", 00:19:18.652 "digest": "sha256", 00:19:18.652 "dhgroup": "ffdhe2048" 00:19:18.652 } 00:19:18.652 } 00:19:18.652 ]' 00:19:18.652 01:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.652 01:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.652 01:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.910 01:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:18.910 01:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.910 01:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.910 01:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.910 01:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.168 01:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWU5YjkxMGQzZDI4NzAxMzYyZGUyZGE0YzFhMjQ1MWabzSDy: --dhchap-ctrl-secret DHHC-1:02:NjgwZWViMmJiNmIwMDM5YzI3OTg1ZDczM2VlMjllZjMwNWNiYzI3N2I5YjU2ODZm6Vpb7Q==: 00:19:20.101 01:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.101 01:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:20.101 01:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.101 01:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.101 01:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.101 01:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.101 01:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:20.101 01:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:20.359 01:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:20.359 01:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.359 01:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:20.359 01:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:19:20.359 01:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:20.359 01:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.359 01:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.359 01:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.359 01:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.359 01:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.359 01:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.359 01:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.617 00:19:20.617 01:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.617 01:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.617 01:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.875 01:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.875 01:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.875 01:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.875 01:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.875 01:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.875 01:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.875 { 00:19:20.875 "cntlid": 13, 00:19:20.875 "qid": 0, 00:19:20.875 "state": "enabled", 00:19:20.875 "thread": "nvmf_tgt_poll_group_000", 00:19:20.875 "listen_address": { 00:19:20.875 "trtype": "TCP", 00:19:20.875 "adrfam": "IPv4", 00:19:20.875 "traddr": "10.0.0.2", 00:19:20.875 "trsvcid": "4420" 00:19:20.875 }, 00:19:20.875 "peer_address": { 00:19:20.875 "trtype": "TCP", 00:19:20.875 "adrfam": "IPv4", 00:19:20.875 "traddr": "10.0.0.1", 00:19:20.875 "trsvcid": "41342" 00:19:20.875 }, 00:19:20.875 "auth": { 00:19:20.875 "state": "completed", 00:19:20.875 "digest": "sha256", 00:19:20.875 "dhgroup": "ffdhe2048" 00:19:20.875 } 00:19:20.875 } 00:19:20.875 ]' 00:19:20.875 01:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.875 01:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:20.875 01:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.875 01:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:20.875 01:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.133 01:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.133 01:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.133 01:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.390 01:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGJjZDc2M2E4ZDI5ZGQ2NDk5ZTJmNzg5OGE5NTMwNWYzZDg4ZWQxMDVkNmY0MmQ0SvADzQ==: --dhchap-ctrl-secret DHHC-1:01:NDFkNjVhYjJmZDBhZTkzY2FiOTA2NGYyZGFmZDE1MjLxfCW9: 00:19:22.327 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.327 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:22.327 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.327 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.327 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.327 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.327 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:22.327 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:22.585 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:22.585 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.585 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:22.585 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:22.585 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:22.585 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.585 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:22.585 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.585 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.585 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.585 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:22.585 01:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:22.842 00:19:22.842 01:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.842 01:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.842 01:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.099 01:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.099 01:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.099 01:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.099 01:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.099 01:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.099 01:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.099 { 00:19:23.099 "cntlid": 15, 00:19:23.099 "qid": 0, 00:19:23.099 "state": "enabled", 00:19:23.099 "thread": "nvmf_tgt_poll_group_000", 00:19:23.099 "listen_address": { 00:19:23.099 "trtype": "TCP", 00:19:23.099 "adrfam": "IPv4", 00:19:23.099 "traddr": "10.0.0.2", 00:19:23.099 "trsvcid": "4420" 00:19:23.099 }, 00:19:23.099 "peer_address": { 00:19:23.099 "trtype": "TCP", 00:19:23.099 "adrfam": "IPv4", 00:19:23.099 "traddr": "10.0.0.1", 00:19:23.099 "trsvcid": "41360" 00:19:23.099 }, 00:19:23.099 "auth": { 
00:19:23.099 "state": "completed", 00:19:23.099 "digest": "sha256", 00:19:23.099 "dhgroup": "ffdhe2048" 00:19:23.099 } 00:19:23.099 } 00:19:23.099 ]' 00:19:23.099 01:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.099 01:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.099 01:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.099 01:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:23.099 01:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.099 01:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.099 01:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.099 01:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.387 01:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NWQ0OTYwYzBkMzg5ZDc5YzhkYTk4OWE2NWRhYmM3MjUyOGQ3YTA0OWI0MmI3ZjhkMTI0OTAwZDljYjcyMzM1Y2lFBjk=: 00:19:24.325 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.325 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:24.325 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.325 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.325 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.325 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:24.325 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.325 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:24.325 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:24.582 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:24.583 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.583 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:24.583 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:24.583 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:24.583 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.583 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.583 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.583 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.583 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.583 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.583 01:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.840 00:19:24.840 01:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.840 01:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.840 01:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.098 01:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.098 01:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.098 01:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:25.098 01:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.098 01:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.098 01:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.098 { 00:19:25.098 "cntlid": 17, 00:19:25.098 "qid": 0, 00:19:25.098 "state": "enabled", 00:19:25.098 "thread": "nvmf_tgt_poll_group_000", 00:19:25.098 "listen_address": { 00:19:25.098 "trtype": "TCP", 00:19:25.098 "adrfam": "IPv4", 00:19:25.098 "traddr": "10.0.0.2", 00:19:25.098 "trsvcid": "4420" 00:19:25.098 }, 00:19:25.098 "peer_address": { 00:19:25.098 "trtype": "TCP", 00:19:25.098 "adrfam": "IPv4", 00:19:25.098 "traddr": "10.0.0.1", 00:19:25.098 "trsvcid": "41380" 00:19:25.098 }, 00:19:25.098 "auth": { 00:19:25.098 "state": "completed", 00:19:25.098 "digest": "sha256", 00:19:25.098 "dhgroup": "ffdhe3072" 00:19:25.098 } 00:19:25.098 } 00:19:25.098 ]' 00:19:25.098 01:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.356 01:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.356 01:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.356 01:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:25.356 01:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.356 01:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.356 01:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.356 01:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.612 01:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNmQ2NjQzMGM3MjFjMTBmY2Q1Yjk0NmZjNmZhMWIxNzg1MDc4OTNjYzI2N2UygDroDA==: --dhchap-ctrl-secret DHHC-1:03:ZTZlYWY3MzM1MDBhMWZhZjljMDgxNDczMTNlZDg0YjQzYWQ3NGUwNDMxOGJhMTc0YTZlN2Y2ZDA3ZmJkYzg1Y7MKYmg=: 00:19:26.543 01:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.543 01:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:26.543 01:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.543 01:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.543 01:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.543 01:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.543 01:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:26.543 01:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:26.801 01:01:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:26.801 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.801 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:26.801 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:26.801 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:26.801 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.801 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.801 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.801 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.801 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.801 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.801 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:19:27.058 00:19:27.058 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.058 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.058 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.316 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.316 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.316 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.316 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.316 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.316 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.316 { 00:19:27.316 "cntlid": 19, 00:19:27.316 "qid": 0, 00:19:27.316 "state": "enabled", 00:19:27.316 "thread": "nvmf_tgt_poll_group_000", 00:19:27.316 "listen_address": { 00:19:27.316 "trtype": "TCP", 00:19:27.316 "adrfam": "IPv4", 00:19:27.316 "traddr": "10.0.0.2", 00:19:27.316 "trsvcid": "4420" 00:19:27.316 }, 00:19:27.316 "peer_address": { 00:19:27.316 "trtype": "TCP", 00:19:27.316 "adrfam": "IPv4", 00:19:27.316 "traddr": "10.0.0.1", 00:19:27.316 "trsvcid": "41420" 00:19:27.316 }, 00:19:27.316 "auth": { 00:19:27.316 "state": "completed", 00:19:27.316 "digest": "sha256", 00:19:27.316 "dhgroup": "ffdhe3072" 00:19:27.316 } 00:19:27.316 } 00:19:27.316 ]' 00:19:27.316 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.316 
01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.316 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.572 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:27.572 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.572 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.572 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.572 01:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.829 01:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWU5YjkxMGQzZDI4NzAxMzYyZGUyZGE0YzFhMjQ1MWabzSDy: --dhchap-ctrl-secret DHHC-1:02:NjgwZWViMmJiNmIwMDM5YzI3OTg1ZDczM2VlMjllZjMwNWNiYzI3N2I5YjU2ODZm6Vpb7Q==: 00:19:28.761 01:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.761 01:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.761 01:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.761 01:01:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.761 01:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.761 01:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.761 01:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:28.761 01:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:29.018 01:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:29.018 01:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.018 01:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:29.018 01:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:29.018 01:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:29.018 01:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.018 01:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.018 01:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.018 01:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.018 01:01:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.018 01:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.018 01:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.274 00:19:29.274 01:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.274 01:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.274 01:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.532 01:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.532 01:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.532 01:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.532 01:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.532 01:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.532 01:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.532 { 
00:19:29.532 "cntlid": 21, 00:19:29.532 "qid": 0, 00:19:29.532 "state": "enabled", 00:19:29.532 "thread": "nvmf_tgt_poll_group_000", 00:19:29.532 "listen_address": { 00:19:29.532 "trtype": "TCP", 00:19:29.532 "adrfam": "IPv4", 00:19:29.532 "traddr": "10.0.0.2", 00:19:29.532 "trsvcid": "4420" 00:19:29.532 }, 00:19:29.532 "peer_address": { 00:19:29.532 "trtype": "TCP", 00:19:29.532 "adrfam": "IPv4", 00:19:29.532 "traddr": "10.0.0.1", 00:19:29.532 "trsvcid": "40532" 00:19:29.532 }, 00:19:29.532 "auth": { 00:19:29.532 "state": "completed", 00:19:29.532 "digest": "sha256", 00:19:29.532 "dhgroup": "ffdhe3072" 00:19:29.532 } 00:19:29.532 } 00:19:29.532 ]' 00:19:29.532 01:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.532 01:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.532 01:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.532 01:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:29.532 01:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.790 01:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.790 01:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.790 01:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.790 01:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGJjZDc2M2E4ZDI5ZGQ2NDk5ZTJmNzg5OGE5NTMwNWYzZDg4ZWQxMDVkNmY0MmQ0SvADzQ==: --dhchap-ctrl-secret DHHC-1:01:NDFkNjVhYjJmZDBhZTkzY2FiOTA2NGYyZGFmZDE1MjLxfCW9: 00:19:30.722 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.722 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:30.722 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.722 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.722 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.722 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.722 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:30.722 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:30.979 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:30.979 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.979 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:30.979 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe3072 00:19:30.979 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:30.979 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.979 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:30.979 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.979 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.979 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.979 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.979 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:31.544 00:19:31.544 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.544 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.544 01:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.801 01:02:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.801 01:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.801 01:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.801 01:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.801 01:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.801 01:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.801 { 00:19:31.801 "cntlid": 23, 00:19:31.801 "qid": 0, 00:19:31.801 "state": "enabled", 00:19:31.801 "thread": "nvmf_tgt_poll_group_000", 00:19:31.801 "listen_address": { 00:19:31.801 "trtype": "TCP", 00:19:31.801 "adrfam": "IPv4", 00:19:31.801 "traddr": "10.0.0.2", 00:19:31.801 "trsvcid": "4420" 00:19:31.801 }, 00:19:31.801 "peer_address": { 00:19:31.801 "trtype": "TCP", 00:19:31.801 "adrfam": "IPv4", 00:19:31.801 "traddr": "10.0.0.1", 00:19:31.801 "trsvcid": "40558" 00:19:31.801 }, 00:19:31.801 "auth": { 00:19:31.801 "state": "completed", 00:19:31.801 "digest": "sha256", 00:19:31.801 "dhgroup": "ffdhe3072" 00:19:31.801 } 00:19:31.801 } 00:19:31.801 ]' 00:19:31.801 01:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.801 01:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.801 01:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.801 01:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:31.801 01:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.801 01:02:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.801 01:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.801 01:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.059 01:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NWQ0OTYwYzBkMzg5ZDc5YzhkYTk4OWE2NWRhYmM3MjUyOGQ3YTA0OWI0MmI3ZjhkMTI0OTAwZDljYjcyMzM1Y2lFBjk=: 00:19:32.991 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.991 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:32.991 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.991 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.991 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.991 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:32.991 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.991 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:32.991 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:33.247 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:19:33.247 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.247 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:33.247 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:33.247 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:33.247 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.247 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.247 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.247 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.247 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.247 01:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.247 01:02:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.810 00:19:33.810 01:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.810 01:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.810 01:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.067 01:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.067 01:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.067 01:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.067 01:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.067 01:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.067 01:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.067 { 00:19:34.067 "cntlid": 25, 00:19:34.067 "qid": 0, 00:19:34.067 "state": "enabled", 00:19:34.067 "thread": "nvmf_tgt_poll_group_000", 00:19:34.067 "listen_address": { 00:19:34.067 "trtype": "TCP", 00:19:34.067 "adrfam": "IPv4", 00:19:34.067 "traddr": "10.0.0.2", 00:19:34.067 "trsvcid": "4420" 00:19:34.067 }, 00:19:34.067 "peer_address": { 00:19:34.067 "trtype": "TCP", 00:19:34.067 "adrfam": "IPv4", 00:19:34.067 "traddr": "10.0.0.1", 
00:19:34.067 "trsvcid": "40592" 00:19:34.067 }, 00:19:34.067 "auth": { 00:19:34.067 "state": "completed", 00:19:34.067 "digest": "sha256", 00:19:34.067 "dhgroup": "ffdhe4096" 00:19:34.067 } 00:19:34.067 } 00:19:34.067 ]' 00:19:34.067 01:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.067 01:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.067 01:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.067 01:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:34.067 01:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.067 01:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.067 01:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.067 01:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.324 01:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNmQ2NjQzMGM3MjFjMTBmY2Q1Yjk0NmZjNmZhMWIxNzg1MDc4OTNjYzI2N2UygDroDA==: --dhchap-ctrl-secret DHHC-1:03:ZTZlYWY3MzM1MDBhMWZhZjljMDgxNDczMTNlZDg0YjQzYWQ3NGUwNDMxOGJhMTc0YTZlN2Y2ZDA3ZmJkYzg1Y7MKYmg=: 00:19:35.256 01:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:19:35.256 01:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.256 01:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.256 01:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.256 01:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.256 01:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.256 01:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:35.256 01:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:35.514 01:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:35.514 01:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.514 01:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:35.514 01:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:35.514 01:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:35.514 01:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.514 01:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.514 01:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.514 01:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.514 01:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.514 01:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.514 01:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.080 00:19:36.080 01:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.080 01:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.080 01:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.080 01:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.338 01:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.338 01:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:36.338 01:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.338 01:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.338 01:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.338 { 00:19:36.338 "cntlid": 27, 00:19:36.338 "qid": 0, 00:19:36.338 "state": "enabled", 00:19:36.338 "thread": "nvmf_tgt_poll_group_000", 00:19:36.338 "listen_address": { 00:19:36.338 "trtype": "TCP", 00:19:36.338 "adrfam": "IPv4", 00:19:36.338 "traddr": "10.0.0.2", 00:19:36.338 "trsvcid": "4420" 00:19:36.338 }, 00:19:36.338 "peer_address": { 00:19:36.338 "trtype": "TCP", 00:19:36.338 "adrfam": "IPv4", 00:19:36.338 "traddr": "10.0.0.1", 00:19:36.338 "trsvcid": "40632" 00:19:36.338 }, 00:19:36.338 "auth": { 00:19:36.338 "state": "completed", 00:19:36.338 "digest": "sha256", 00:19:36.338 "dhgroup": "ffdhe4096" 00:19:36.338 } 00:19:36.338 } 00:19:36.338 ]' 00:19:36.338 01:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.338 01:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.338 01:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.338 01:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:36.338 01:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.338 01:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.338 01:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.338 01:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.596 01:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWU5YjkxMGQzZDI4NzAxMzYyZGUyZGE0YzFhMjQ1MWabzSDy: --dhchap-ctrl-secret DHHC-1:02:NjgwZWViMmJiNmIwMDM5YzI3OTg1ZDczM2VlMjllZjMwNWNiYzI3N2I5YjU2ODZm6Vpb7Q==: 00:19:37.527 01:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.527 01:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:37.527 01:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.527 01:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.527 01:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.527 01:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.527 01:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:37.527 01:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:37.785 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 2 00:19:37.785 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.785 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:37.785 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:37.785 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:37.785 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.785 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.785 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.785 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.785 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.785 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.785 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.350 00:19:38.350 01:02:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.350 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.350 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.350 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.350 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.350 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.350 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.350 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.350 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.350 { 00:19:38.350 "cntlid": 29, 00:19:38.350 "qid": 0, 00:19:38.350 "state": "enabled", 00:19:38.350 "thread": "nvmf_tgt_poll_group_000", 00:19:38.350 "listen_address": { 00:19:38.350 "trtype": "TCP", 00:19:38.350 "adrfam": "IPv4", 00:19:38.350 "traddr": "10.0.0.2", 00:19:38.350 "trsvcid": "4420" 00:19:38.350 }, 00:19:38.350 "peer_address": { 00:19:38.350 "trtype": "TCP", 00:19:38.350 "adrfam": "IPv4", 00:19:38.350 "traddr": "10.0.0.1", 00:19:38.350 "trsvcid": "40666" 00:19:38.350 }, 00:19:38.350 "auth": { 00:19:38.350 "state": "completed", 00:19:38.350 "digest": "sha256", 00:19:38.350 "dhgroup": "ffdhe4096" 00:19:38.350 } 00:19:38.350 } 00:19:38.350 ]' 00:19:38.350 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.608 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.608 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.608 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:38.608 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.608 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.608 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.608 01:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.866 01:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGJjZDc2M2E4ZDI5ZGQ2NDk5ZTJmNzg5OGE5NTMwNWYzZDg4ZWQxMDVkNmY0MmQ0SvADzQ==: --dhchap-ctrl-secret DHHC-1:01:NDFkNjVhYjJmZDBhZTkzY2FiOTA2NGYyZGFmZDE1MjLxfCW9: 00:19:39.798 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.798 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.798 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.798 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:39.798 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.798 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.798 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:39.798 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:40.056 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:40.056 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.056 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:40.056 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:40.056 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:40.056 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.056 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:40.056 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.056 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.056 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:40.056 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:40.056 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:40.315 00:19:40.574 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.574 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.574 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.574 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.574 01:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.574 01:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.574 01:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.832 01:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.832 01:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.832 { 00:19:40.832 "cntlid": 31, 00:19:40.832 "qid": 0, 00:19:40.832 "state": "enabled", 00:19:40.832 "thread": "nvmf_tgt_poll_group_000", 
00:19:40.832 "listen_address": { 00:19:40.832 "trtype": "TCP", 00:19:40.832 "adrfam": "IPv4", 00:19:40.832 "traddr": "10.0.0.2", 00:19:40.832 "trsvcid": "4420" 00:19:40.832 }, 00:19:40.832 "peer_address": { 00:19:40.832 "trtype": "TCP", 00:19:40.832 "adrfam": "IPv4", 00:19:40.832 "traddr": "10.0.0.1", 00:19:40.832 "trsvcid": "57782" 00:19:40.832 }, 00:19:40.832 "auth": { 00:19:40.832 "state": "completed", 00:19:40.832 "digest": "sha256", 00:19:40.832 "dhgroup": "ffdhe4096" 00:19:40.832 } 00:19:40.832 } 00:19:40.832 ]' 00:19:40.832 01:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.832 01:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.832 01:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.832 01:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:40.832 01:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.832 01:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.832 01:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.832 01:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.090 01:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NWQ0OTYwYzBkMzg5ZDc5YzhkYTk4OWE2NWRhYmM3MjUyOGQ3YTA0OWI0MmI3ZjhkMTI0OTAwZDljYjcyMzM1Y2lFBjk=: 
00:19:42.025 01:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.025 01:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:42.025 01:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.025 01:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.025 01:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.025 01:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:42.025 01:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.025 01:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:42.025 01:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:42.283 01:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:42.283 01:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.283 01:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:42.283 01:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:42.283 01:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:19:42.283 01:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.283 01:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.283 01:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.283 01:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.283 01:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.283 01:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.283 01:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.851 00:19:42.852 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.852 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.852 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.109 01:02:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.109 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.109 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.109 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.109 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.109 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.109 { 00:19:43.109 "cntlid": 33, 00:19:43.109 "qid": 0, 00:19:43.109 "state": "enabled", 00:19:43.109 "thread": "nvmf_tgt_poll_group_000", 00:19:43.109 "listen_address": { 00:19:43.109 "trtype": "TCP", 00:19:43.109 "adrfam": "IPv4", 00:19:43.109 "traddr": "10.0.0.2", 00:19:43.109 "trsvcid": "4420" 00:19:43.109 }, 00:19:43.109 "peer_address": { 00:19:43.109 "trtype": "TCP", 00:19:43.109 "adrfam": "IPv4", 00:19:43.109 "traddr": "10.0.0.1", 00:19:43.109 "trsvcid": "57810" 00:19:43.109 }, 00:19:43.109 "auth": { 00:19:43.109 "state": "completed", 00:19:43.109 "digest": "sha256", 00:19:43.109 "dhgroup": "ffdhe6144" 00:19:43.109 } 00:19:43.109 } 00:19:43.109 ]' 00:19:43.109 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.109 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.109 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.109 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:43.109 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.366 01:02:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.366 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.366 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.625 01:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNmQ2NjQzMGM3MjFjMTBmY2Q1Yjk0NmZjNmZhMWIxNzg1MDc4OTNjYzI2N2UygDroDA==: --dhchap-ctrl-secret DHHC-1:03:ZTZlYWY3MzM1MDBhMWZhZjljMDgxNDczMTNlZDg0YjQzYWQ3NGUwNDMxOGJhMTc0YTZlN2Y2ZDA3ZmJkYzg1Y7MKYmg=: 00:19:44.560 01:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.560 01:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.560 01:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.560 01:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.560 01:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.560 01:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.560 01:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe6144 00:19:44.560 01:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:44.818 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:19:44.818 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.818 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:44.818 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:44.818 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:44.818 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.818 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.818 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.818 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.818 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.818 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.818 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.384 00:19:45.384 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.384 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.384 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.641 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.641 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.641 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.641 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.641 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.641 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.641 { 00:19:45.641 "cntlid": 35, 00:19:45.641 "qid": 0, 00:19:45.641 "state": "enabled", 00:19:45.641 "thread": "nvmf_tgt_poll_group_000", 00:19:45.641 "listen_address": { 00:19:45.641 "trtype": "TCP", 00:19:45.641 "adrfam": "IPv4", 00:19:45.641 "traddr": "10.0.0.2", 00:19:45.641 "trsvcid": "4420" 00:19:45.641 }, 00:19:45.641 "peer_address": { 00:19:45.641 "trtype": "TCP", 00:19:45.641 "adrfam": "IPv4", 00:19:45.641 "traddr": "10.0.0.1", 00:19:45.641 "trsvcid": "57832" 00:19:45.641 
}, 00:19:45.641 "auth": { 00:19:45.641 "state": "completed", 00:19:45.641 "digest": "sha256", 00:19:45.641 "dhgroup": "ffdhe6144" 00:19:45.641 } 00:19:45.641 } 00:19:45.641 ]' 00:19:45.641 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.641 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.641 01:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.641 01:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:45.641 01:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.899 01:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.899 01:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.899 01:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.899 01:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWU5YjkxMGQzZDI4NzAxMzYyZGUyZGE0YzFhMjQ1MWabzSDy: --dhchap-ctrl-secret DHHC-1:02:NjgwZWViMmJiNmIwMDM5YzI3OTg1ZDczM2VlMjllZjMwNWNiYzI3N2I5YjU2ODZm6Vpb7Q==: 00:19:47.273 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.273 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.273 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.273 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.273 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.273 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.273 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:47.273 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:47.273 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:47.273 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.273 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:47.273 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:47.273 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:47.273 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.273 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:19:47.273 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.273 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.273 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.273 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.273 01:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.839 00:19:47.839 01:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.839 01:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.839 01:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.097 01:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.097 01:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.097 01:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.097 01:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:19:48.097 01:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.097 01:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.097 { 00:19:48.097 "cntlid": 37, 00:19:48.097 "qid": 0, 00:19:48.097 "state": "enabled", 00:19:48.097 "thread": "nvmf_tgt_poll_group_000", 00:19:48.097 "listen_address": { 00:19:48.097 "trtype": "TCP", 00:19:48.097 "adrfam": "IPv4", 00:19:48.097 "traddr": "10.0.0.2", 00:19:48.097 "trsvcid": "4420" 00:19:48.097 }, 00:19:48.097 "peer_address": { 00:19:48.097 "trtype": "TCP", 00:19:48.097 "adrfam": "IPv4", 00:19:48.097 "traddr": "10.0.0.1", 00:19:48.097 "trsvcid": "57846" 00:19:48.097 }, 00:19:48.097 "auth": { 00:19:48.097 "state": "completed", 00:19:48.097 "digest": "sha256", 00:19:48.097 "dhgroup": "ffdhe6144" 00:19:48.097 } 00:19:48.097 } 00:19:48.097 ]' 00:19:48.097 01:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.097 01:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.097 01:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.097 01:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:48.097 01:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.355 01:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.355 01:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.355 01:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:48.355 01:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGJjZDc2M2E4ZDI5ZGQ2NDk5ZTJmNzg5OGE5NTMwNWYzZDg4ZWQxMDVkNmY0MmQ0SvADzQ==: --dhchap-ctrl-secret DHHC-1:01:NDFkNjVhYjJmZDBhZTkzY2FiOTA2NGYyZGFmZDE1MjLxfCW9: 00:19:49.731 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.731 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:49.731 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.731 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.731 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.731 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.731 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:49.731 01:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:49.731 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:19:49.731 01:02:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.731 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:49.731 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:49.731 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:49.731 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.731 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:49.731 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.731 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.731 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.731 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:49.731 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.299 00:19:50.299 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.299 01:02:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.299 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.557 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.557 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.557 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.557 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.557 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.557 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.557 { 00:19:50.557 "cntlid": 39, 00:19:50.557 "qid": 0, 00:19:50.557 "state": "enabled", 00:19:50.557 "thread": "nvmf_tgt_poll_group_000", 00:19:50.557 "listen_address": { 00:19:50.557 "trtype": "TCP", 00:19:50.557 "adrfam": "IPv4", 00:19:50.557 "traddr": "10.0.0.2", 00:19:50.557 "trsvcid": "4420" 00:19:50.557 }, 00:19:50.557 "peer_address": { 00:19:50.557 "trtype": "TCP", 00:19:50.557 "adrfam": "IPv4", 00:19:50.557 "traddr": "10.0.0.1", 00:19:50.557 "trsvcid": "37994" 00:19:50.557 }, 00:19:50.557 "auth": { 00:19:50.557 "state": "completed", 00:19:50.557 "digest": "sha256", 00:19:50.557 "dhgroup": "ffdhe6144" 00:19:50.557 } 00:19:50.557 } 00:19:50.557 ]' 00:19:50.558 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.558 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.558 01:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.815 01:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:50.815 01:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.815 01:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.815 01:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.816 01:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.074 01:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NWQ0OTYwYzBkMzg5ZDc5YzhkYTk4OWE2NWRhYmM3MjUyOGQ3YTA0OWI0MmI3ZjhkMTI0OTAwZDljYjcyMzM1Y2lFBjk=: 00:19:52.009 01:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.009 01:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.009 01:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.009 01:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.009 01:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.009 01:02:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.009 01:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:52.009 01:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:52.009 01:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:52.268 01:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:52.268 01:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.268 01:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:52.268 01:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:52.268 01:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:52.268 01:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.268 01:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.268 01:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.268 01:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.268 01:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.268 01:02:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.268 01:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.205 00:19:53.205 01:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.205 01:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.205 01:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.461 01:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.461 01:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.461 01:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.461 01:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.461 01:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.461 01:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.461 { 00:19:53.461 "cntlid": 41, 00:19:53.461 "qid": 0, 00:19:53.461 "state": "enabled", 00:19:53.461 "thread": 
"nvmf_tgt_poll_group_000", 00:19:53.461 "listen_address": { 00:19:53.461 "trtype": "TCP", 00:19:53.461 "adrfam": "IPv4", 00:19:53.461 "traddr": "10.0.0.2", 00:19:53.461 "trsvcid": "4420" 00:19:53.461 }, 00:19:53.461 "peer_address": { 00:19:53.461 "trtype": "TCP", 00:19:53.461 "adrfam": "IPv4", 00:19:53.461 "traddr": "10.0.0.1", 00:19:53.461 "trsvcid": "38030" 00:19:53.461 }, 00:19:53.461 "auth": { 00:19:53.461 "state": "completed", 00:19:53.461 "digest": "sha256", 00:19:53.461 "dhgroup": "ffdhe8192" 00:19:53.461 } 00:19:53.461 } 00:19:53.461 ]' 00:19:53.461 01:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.461 01:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.461 01:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.461 01:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:53.461 01:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.461 01:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.461 01:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.461 01:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.720 01:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:ODkwNmQ2NjQzMGM3MjFjMTBmY2Q1Yjk0NmZjNmZhMWIxNzg1MDc4OTNjYzI2N2UygDroDA==: --dhchap-ctrl-secret DHHC-1:03:ZTZlYWY3MzM1MDBhMWZhZjljMDgxNDczMTNlZDg0YjQzYWQ3NGUwNDMxOGJhMTc0YTZlN2Y2ZDA3ZmJkYzg1Y7MKYmg=: 00:19:54.657 01:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.657 01:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:54.917 01:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.917 01:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.917 01:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.917 01:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.917 01:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:54.917 01:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:54.917 01:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:54.917 01:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.917 01:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:54.917 01:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe8192 00:19:54.918 01:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:54.918 01:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.918 01:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.918 01:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.918 01:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.178 01:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.178 01:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.178 01:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.158 00:19:56.158 01:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.158 01:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.158 01:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.158 01:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.158 01:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.158 01:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.158 01:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.158 01:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.158 01:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.158 { 00:19:56.158 "cntlid": 43, 00:19:56.158 "qid": 0, 00:19:56.158 "state": "enabled", 00:19:56.158 "thread": "nvmf_tgt_poll_group_000", 00:19:56.158 "listen_address": { 00:19:56.158 "trtype": "TCP", 00:19:56.158 "adrfam": "IPv4", 00:19:56.158 "traddr": "10.0.0.2", 00:19:56.158 "trsvcid": "4420" 00:19:56.158 }, 00:19:56.158 "peer_address": { 00:19:56.158 "trtype": "TCP", 00:19:56.158 "adrfam": "IPv4", 00:19:56.158 "traddr": "10.0.0.1", 00:19:56.158 "trsvcid": "38064" 00:19:56.158 }, 00:19:56.158 "auth": { 00:19:56.158 "state": "completed", 00:19:56.158 "digest": "sha256", 00:19:56.158 "dhgroup": "ffdhe8192" 00:19:56.158 } 00:19:56.158 } 00:19:56.158 ]' 00:19:56.158 01:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.158 01:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.158 01:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.417 01:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:56.417 01:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.417 01:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.417 01:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.417 01:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.676 01:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWU5YjkxMGQzZDI4NzAxMzYyZGUyZGE0YzFhMjQ1MWabzSDy: --dhchap-ctrl-secret DHHC-1:02:NjgwZWViMmJiNmIwMDM5YzI3OTg1ZDczM2VlMjllZjMwNWNiYzI3N2I5YjU2ODZm6Vpb7Q==: 00:19:57.612 01:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.612 01:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.612 01:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.612 01:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.612 01:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.612 01:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.612 01:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:57.612 01:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:57.870 01:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:57.870 01:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.870 01:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:57.870 01:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:57.870 01:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:57.870 01:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.870 01:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.870 01:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.870 01:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.870 01:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.870 01:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.870 01:02:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.807 00:19:58.807 01:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.807 01:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.807 01:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.065 01:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.065 01:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.065 01:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.065 01:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.065 01:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.065 01:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:59.065 { 00:19:59.065 "cntlid": 45, 00:19:59.065 "qid": 0, 00:19:59.065 "state": "enabled", 00:19:59.065 "thread": "nvmf_tgt_poll_group_000", 00:19:59.065 "listen_address": { 00:19:59.065 "trtype": "TCP", 00:19:59.065 "adrfam": "IPv4", 00:19:59.065 "traddr": "10.0.0.2", 00:19:59.065 "trsvcid": "4420" 00:19:59.065 }, 00:19:59.065 "peer_address": { 00:19:59.065 "trtype": "TCP", 00:19:59.065 "adrfam": "IPv4", 00:19:59.065 "traddr": "10.0.0.1", 
00:19:59.065 "trsvcid": "38084" 00:19:59.065 }, 00:19:59.065 "auth": { 00:19:59.065 "state": "completed", 00:19:59.065 "digest": "sha256", 00:19:59.065 "dhgroup": "ffdhe8192" 00:19:59.065 } 00:19:59.065 } 00:19:59.065 ]' 00:19:59.065 01:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.065 01:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.065 01:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.065 01:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:59.065 01:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.066 01:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.066 01:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.066 01:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.324 01:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGJjZDc2M2E4ZDI5ZGQ2NDk5ZTJmNzg5OGE5NTMwNWYzZDg4ZWQxMDVkNmY0MmQ0SvADzQ==: --dhchap-ctrl-secret DHHC-1:01:NDFkNjVhYjJmZDBhZTkzY2FiOTA2NGYyZGFmZDE1MjLxfCW9: 00:20:00.261 01:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.261 01:02:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:00.261 01:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.261 01:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.261 01:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.261 01:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:00.261 01:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:00.261 01:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:00.827 01:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:20:00.827 01:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:00.827 01:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:00.827 01:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:00.827 01:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:00.827 01:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.827 01:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:00.827 01:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.827 01:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.827 01:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.827 01:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:00.827 01:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:01.761 00:20:01.761 01:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.761 01:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.761 01:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.761 01:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.761 01:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.761 01:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.761 01:02:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.761 01:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.761 01:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:01.761 { 00:20:01.761 "cntlid": 47, 00:20:01.761 "qid": 0, 00:20:01.761 "state": "enabled", 00:20:01.761 "thread": "nvmf_tgt_poll_group_000", 00:20:01.761 "listen_address": { 00:20:01.761 "trtype": "TCP", 00:20:01.761 "adrfam": "IPv4", 00:20:01.761 "traddr": "10.0.0.2", 00:20:01.761 "trsvcid": "4420" 00:20:01.761 }, 00:20:01.761 "peer_address": { 00:20:01.761 "trtype": "TCP", 00:20:01.761 "adrfam": "IPv4", 00:20:01.761 "traddr": "10.0.0.1", 00:20:01.761 "trsvcid": "54366" 00:20:01.761 }, 00:20:01.761 "auth": { 00:20:01.761 "state": "completed", 00:20:01.761 "digest": "sha256", 00:20:01.761 "dhgroup": "ffdhe8192" 00:20:01.761 } 00:20:01.761 } 00:20:01.761 ]' 00:20:01.761 01:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:01.761 01:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:01.761 01:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.018 01:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:02.018 01:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.018 01:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.018 01:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.018 01:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.275 01:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NWQ0OTYwYzBkMzg5ZDc5YzhkYTk4OWE2NWRhYmM3MjUyOGQ3YTA0OWI0MmI3ZjhkMTI0OTAwZDljYjcyMzM1Y2lFBjk=: 00:20:03.208 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.208 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:03.208 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.208 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.208 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.208 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:03.208 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:03.208 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.208 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:03.208 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:03.464 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:20:03.464 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.464 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:03.464 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:03.464 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:03.464 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.464 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.464 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.464 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.465 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.465 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.465 01:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.722 00:20:03.722 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.722 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.722 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.979 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.979 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.979 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.979 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.980 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.980 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.980 { 00:20:03.980 "cntlid": 49, 00:20:03.980 "qid": 0, 00:20:03.980 "state": "enabled", 00:20:03.980 "thread": "nvmf_tgt_poll_group_000", 00:20:03.980 "listen_address": { 00:20:03.980 "trtype": "TCP", 00:20:03.980 "adrfam": "IPv4", 00:20:03.980 "traddr": "10.0.0.2", 00:20:03.980 "trsvcid": "4420" 00:20:03.980 }, 00:20:03.980 "peer_address": { 00:20:03.980 "trtype": "TCP", 00:20:03.980 "adrfam": "IPv4", 00:20:03.980 "traddr": "10.0.0.1", 00:20:03.980 "trsvcid": "54398" 00:20:03.980 }, 00:20:03.980 "auth": { 00:20:03.980 "state": "completed", 00:20:03.980 "digest": "sha384", 00:20:03.980 "dhgroup": "null" 00:20:03.980 } 00:20:03.980 } 00:20:03.980 ]' 00:20:03.980 
01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.980 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:03.980 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.980 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:03.980 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.237 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.237 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.237 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.496 01:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNmQ2NjQzMGM3MjFjMTBmY2Q1Yjk0NmZjNmZhMWIxNzg1MDc4OTNjYzI2N2UygDroDA==: --dhchap-ctrl-secret DHHC-1:03:ZTZlYWY3MzM1MDBhMWZhZjljMDgxNDczMTNlZDg0YjQzYWQ3NGUwNDMxOGJhMTc0YTZlN2Y2ZDA3ZmJkYzg1Y7MKYmg=: 00:20:05.432 01:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.432 01:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:05.432 
01:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.432 01:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.432 01:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.432 01:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:05.433 01:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:05.433 01:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:05.690 01:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:20:05.690 01:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:05.690 01:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:05.690 01:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:05.690 01:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:05.690 01:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.690 01:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.690 01:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.690 01:02:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.690 01:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.690 01:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.690 01:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.948 00:20:05.948 01:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:05.948 01:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:05.948 01:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.206 01:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.206 01:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.206 01:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.206 01:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.206 01:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:06.206 01:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:06.206 { 00:20:06.206 "cntlid": 51, 00:20:06.206 "qid": 0, 00:20:06.206 "state": "enabled", 00:20:06.206 "thread": "nvmf_tgt_poll_group_000", 00:20:06.206 "listen_address": { 00:20:06.206 "trtype": "TCP", 00:20:06.206 "adrfam": "IPv4", 00:20:06.206 "traddr": "10.0.0.2", 00:20:06.206 "trsvcid": "4420" 00:20:06.206 }, 00:20:06.206 "peer_address": { 00:20:06.206 "trtype": "TCP", 00:20:06.206 "adrfam": "IPv4", 00:20:06.206 "traddr": "10.0.0.1", 00:20:06.206 "trsvcid": "54424" 00:20:06.206 }, 00:20:06.206 "auth": { 00:20:06.206 "state": "completed", 00:20:06.206 "digest": "sha384", 00:20:06.206 "dhgroup": "null" 00:20:06.206 } 00:20:06.206 } 00:20:06.206 ]' 00:20:06.206 01:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:06.206 01:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:06.206 01:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:06.206 01:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:06.206 01:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:06.206 01:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.206 01:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.206 01:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.465 01:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWU5YjkxMGQzZDI4NzAxMzYyZGUyZGE0YzFhMjQ1MWabzSDy: --dhchap-ctrl-secret DHHC-1:02:NjgwZWViMmJiNmIwMDM5YzI3OTg1ZDczM2VlMjllZjMwNWNiYzI3N2I5YjU2ODZm6Vpb7Q==: 00:20:07.402 01:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.402 01:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.402 01:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.402 01:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.402 01:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.402 01:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:07.402 01:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:07.402 01:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:07.660 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:20:07.660 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.660 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:07.660 01:02:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:07.660 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:07.660 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.660 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.660 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.660 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.660 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.660 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.660 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.228 00:20:08.228 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:08.228 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.228 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.228 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.228 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.228 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.228 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.228 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.228 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.228 { 00:20:08.228 "cntlid": 53, 00:20:08.228 "qid": 0, 00:20:08.228 "state": "enabled", 00:20:08.228 "thread": "nvmf_tgt_poll_group_000", 00:20:08.228 "listen_address": { 00:20:08.228 "trtype": "TCP", 00:20:08.228 "adrfam": "IPv4", 00:20:08.228 "traddr": "10.0.0.2", 00:20:08.228 "trsvcid": "4420" 00:20:08.228 }, 00:20:08.228 "peer_address": { 00:20:08.228 "trtype": "TCP", 00:20:08.228 "adrfam": "IPv4", 00:20:08.228 "traddr": "10.0.0.1", 00:20:08.228 "trsvcid": "54448" 00:20:08.228 }, 00:20:08.228 "auth": { 00:20:08.228 "state": "completed", 00:20:08.228 "digest": "sha384", 00:20:08.228 "dhgroup": "null" 00:20:08.228 } 00:20:08.228 } 00:20:08.228 ]' 00:20:08.228 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.485 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.485 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.485 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:08.485 01:02:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.485 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.485 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.485 01:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.743 01:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGJjZDc2M2E4ZDI5ZGQ2NDk5ZTJmNzg5OGE5NTMwNWYzZDg4ZWQxMDVkNmY0MmQ0SvADzQ==: --dhchap-ctrl-secret DHHC-1:01:NDFkNjVhYjJmZDBhZTkzY2FiOTA2NGYyZGFmZDE1MjLxfCW9: 00:20:09.678 01:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.678 01:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.678 01:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.678 01:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.678 01:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.678 01:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.678 01:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:09.678 01:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:09.936 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:09.936 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.936 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:09.936 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:09.936 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:09.936 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.936 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:09.936 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.936 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.936 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.936 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.936 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:10.196 00:20:10.455 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.455 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:10.455 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.713 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.713 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.713 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.713 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.713 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.713 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.713 { 00:20:10.713 "cntlid": 55, 00:20:10.713 "qid": 0, 00:20:10.713 "state": "enabled", 00:20:10.713 "thread": "nvmf_tgt_poll_group_000", 00:20:10.713 "listen_address": { 00:20:10.713 "trtype": "TCP", 00:20:10.713 "adrfam": "IPv4", 00:20:10.713 "traddr": "10.0.0.2", 00:20:10.713 "trsvcid": "4420" 00:20:10.713 }, 00:20:10.713 "peer_address": { 00:20:10.713 "trtype": "TCP", 00:20:10.713 "adrfam": "IPv4", 00:20:10.713 "traddr": "10.0.0.1", 00:20:10.713 "trsvcid": "37332" 00:20:10.713 }, 00:20:10.713 "auth": { 
00:20:10.713 "state": "completed", 00:20:10.713 "digest": "sha384", 00:20:10.713 "dhgroup": "null" 00:20:10.713 } 00:20:10.713 } 00:20:10.713 ]' 00:20:10.713 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.713 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.713 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.713 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:10.713 01:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.713 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.713 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.713 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.972 01:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NWQ0OTYwYzBkMzg5ZDc5YzhkYTk4OWE2NWRhYmM3MjUyOGQ3YTA0OWI0MmI3ZjhkMTI0OTAwZDljYjcyMzM1Y2lFBjk=: 00:20:11.907 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.907 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.907 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.907 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.907 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.165 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:12.165 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:12.165 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:12.165 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:12.165 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:12.165 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:12.165 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:12.165 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:12.165 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:12.165 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.165 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.165 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.165 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.423 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.423 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.423 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.709 00:20:12.709 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.709 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.709 01:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.967 01:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.967 01:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.967 01:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:12.967 01:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.967 01:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.967 01:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:12.967 { 00:20:12.967 "cntlid": 57, 00:20:12.967 "qid": 0, 00:20:12.967 "state": "enabled", 00:20:12.967 "thread": "nvmf_tgt_poll_group_000", 00:20:12.967 "listen_address": { 00:20:12.967 "trtype": "TCP", 00:20:12.967 "adrfam": "IPv4", 00:20:12.967 "traddr": "10.0.0.2", 00:20:12.967 "trsvcid": "4420" 00:20:12.967 }, 00:20:12.967 "peer_address": { 00:20:12.967 "trtype": "TCP", 00:20:12.967 "adrfam": "IPv4", 00:20:12.967 "traddr": "10.0.0.1", 00:20:12.967 "trsvcid": "37360" 00:20:12.967 }, 00:20:12.967 "auth": { 00:20:12.967 "state": "completed", 00:20:12.967 "digest": "sha384", 00:20:12.967 "dhgroup": "ffdhe2048" 00:20:12.967 } 00:20:12.967 } 00:20:12.967 ]' 00:20:12.967 01:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:12.967 01:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.967 01:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:12.967 01:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:12.967 01:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:12.967 01:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.967 01:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.967 01:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.223 01:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNmQ2NjQzMGM3MjFjMTBmY2Q1Yjk0NmZjNmZhMWIxNzg1MDc4OTNjYzI2N2UygDroDA==: --dhchap-ctrl-secret DHHC-1:03:ZTZlYWY3MzM1MDBhMWZhZjljMDgxNDczMTNlZDg0YjQzYWQ3NGUwNDMxOGJhMTc0YTZlN2Y2ZDA3ZmJkYzg1Y7MKYmg=: 00:20:14.155 01:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.155 01:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.155 01:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.155 01:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.155 01:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.155 01:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.155 01:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:14.155 01:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:14.412 01:02:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:14.413 01:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:14.413 01:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:14.413 01:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:14.413 01:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:14.413 01:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.413 01:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.413 01:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.413 01:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.413 01:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.413 01:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.413 01:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:14.670 00:20:14.670 01:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.670 01:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.670 01:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.927 01:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.927 01:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.927 01:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.927 01:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.927 01:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.927 01:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.927 { 00:20:14.927 "cntlid": 59, 00:20:14.927 "qid": 0, 00:20:14.927 "state": "enabled", 00:20:14.927 "thread": "nvmf_tgt_poll_group_000", 00:20:14.927 "listen_address": { 00:20:14.927 "trtype": "TCP", 00:20:14.927 "adrfam": "IPv4", 00:20:14.927 "traddr": "10.0.0.2", 00:20:14.927 "trsvcid": "4420" 00:20:14.927 }, 00:20:14.927 "peer_address": { 00:20:14.927 "trtype": "TCP", 00:20:14.927 "adrfam": "IPv4", 00:20:14.927 "traddr": "10.0.0.1", 00:20:14.927 "trsvcid": "37378" 00:20:14.927 }, 00:20:14.927 "auth": { 00:20:14.927 "state": "completed", 00:20:14.927 "digest": "sha384", 00:20:14.927 "dhgroup": "ffdhe2048" 00:20:14.927 } 00:20:14.927 } 00:20:14.927 ]' 00:20:14.927 01:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:15.185 
01:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.185 01:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:15.185 01:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:15.185 01:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:15.185 01:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.185 01:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.185 01:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.442 01:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWU5YjkxMGQzZDI4NzAxMzYyZGUyZGE0YzFhMjQ1MWabzSDy: --dhchap-ctrl-secret DHHC-1:02:NjgwZWViMmJiNmIwMDM5YzI3OTg1ZDczM2VlMjllZjMwNWNiYzI3N2I5YjU2ODZm6Vpb7Q==: 00:20:16.378 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.378 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.378 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.378 01:02:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.378 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.378 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:16.378 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:16.378 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:16.636 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:16.636 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:16.637 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:16.637 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:16.637 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:16.637 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.637 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.637 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.637 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.637 01:02:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.637 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.637 01:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.896 00:20:17.156 01:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:17.156 01:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:17.156 01:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.156 01:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.156 01:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.156 01:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.156 01:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.414 01:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.414 01:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:17.414 { 
00:20:17.414 "cntlid": 61, 00:20:17.414 "qid": 0, 00:20:17.414 "state": "enabled", 00:20:17.414 "thread": "nvmf_tgt_poll_group_000", 00:20:17.414 "listen_address": { 00:20:17.414 "trtype": "TCP", 00:20:17.414 "adrfam": "IPv4", 00:20:17.414 "traddr": "10.0.0.2", 00:20:17.414 "trsvcid": "4420" 00:20:17.414 }, 00:20:17.414 "peer_address": { 00:20:17.414 "trtype": "TCP", 00:20:17.414 "adrfam": "IPv4", 00:20:17.414 "traddr": "10.0.0.1", 00:20:17.414 "trsvcid": "37398" 00:20:17.414 }, 00:20:17.414 "auth": { 00:20:17.414 "state": "completed", 00:20:17.414 "digest": "sha384", 00:20:17.414 "dhgroup": "ffdhe2048" 00:20:17.414 } 00:20:17.414 } 00:20:17.414 ]' 00:20:17.414 01:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:17.414 01:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.414 01:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:17.414 01:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:17.414 01:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:17.414 01:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.414 01:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.414 01:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.672 01:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGJjZDc2M2E4ZDI5ZGQ2NDk5ZTJmNzg5OGE5NTMwNWYzZDg4ZWQxMDVkNmY0MmQ0SvADzQ==: --dhchap-ctrl-secret DHHC-1:01:NDFkNjVhYjJmZDBhZTkzY2FiOTA2NGYyZGFmZDE1MjLxfCW9: 00:20:18.607 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.607 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.607 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.607 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.607 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.607 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:18.607 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:18.608 01:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:18.865 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:18.865 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.865 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:18.865 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:20:18.865 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:18.865 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.865 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:18.865 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.865 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.865 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.865 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:18.865 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:19.433 00:20:19.433 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.433 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.433 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.433 01:02:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.433 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.433 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.433 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.433 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.433 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.433 { 00:20:19.433 "cntlid": 63, 00:20:19.433 "qid": 0, 00:20:19.433 "state": "enabled", 00:20:19.433 "thread": "nvmf_tgt_poll_group_000", 00:20:19.433 "listen_address": { 00:20:19.433 "trtype": "TCP", 00:20:19.433 "adrfam": "IPv4", 00:20:19.433 "traddr": "10.0.0.2", 00:20:19.433 "trsvcid": "4420" 00:20:19.433 }, 00:20:19.433 "peer_address": { 00:20:19.433 "trtype": "TCP", 00:20:19.433 "adrfam": "IPv4", 00:20:19.433 "traddr": "10.0.0.1", 00:20:19.433 "trsvcid": "53526" 00:20:19.433 }, 00:20:19.433 "auth": { 00:20:19.433 "state": "completed", 00:20:19.433 "digest": "sha384", 00:20:19.433 "dhgroup": "ffdhe2048" 00:20:19.433 } 00:20:19.433 } 00:20:19.433 ]' 00:20:19.433 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.690 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.690 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:19.690 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:19.690 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:19.690 01:02:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.690 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.690 01:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.947 01:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NWQ0OTYwYzBkMzg5ZDc5YzhkYTk4OWE2NWRhYmM3MjUyOGQ3YTA0OWI0MmI3ZjhkMTI0OTAwZDljYjcyMzM1Y2lFBjk=: 00:20:20.880 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.880 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.880 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.880 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.880 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.880 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.880 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.880 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:20.880 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:21.138 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:21.138 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.138 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:21.138 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:21.138 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:21.138 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.138 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.138 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.138 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.138 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.138 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.138 01:02:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.396 00:20:21.396 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.396 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:21.396 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.654 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.654 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.654 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.654 01:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.654 01:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.654 01:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.654 { 00:20:21.654 "cntlid": 65, 00:20:21.654 "qid": 0, 00:20:21.654 "state": "enabled", 00:20:21.654 "thread": "nvmf_tgt_poll_group_000", 00:20:21.654 "listen_address": { 00:20:21.654 "trtype": "TCP", 00:20:21.654 "adrfam": "IPv4", 00:20:21.654 "traddr": "10.0.0.2", 00:20:21.654 "trsvcid": "4420" 00:20:21.654 }, 00:20:21.654 "peer_address": { 00:20:21.654 "trtype": "TCP", 00:20:21.654 "adrfam": "IPv4", 00:20:21.654 "traddr": "10.0.0.1", 
00:20:21.654 "trsvcid": "53556" 00:20:21.654 }, 00:20:21.654 "auth": { 00:20:21.654 "state": "completed", 00:20:21.654 "digest": "sha384", 00:20:21.654 "dhgroup": "ffdhe3072" 00:20:21.654 } 00:20:21.654 } 00:20:21.654 ]' 00:20:21.654 01:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.654 01:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.654 01:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.912 01:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:21.912 01:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.912 01:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.912 01:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.912 01:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.169 01:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNmQ2NjQzMGM3MjFjMTBmY2Q1Yjk0NmZjNmZhMWIxNzg1MDc4OTNjYzI2N2UygDroDA==: --dhchap-ctrl-secret DHHC-1:03:ZTZlYWY3MzM1MDBhMWZhZjljMDgxNDczMTNlZDg0YjQzYWQ3NGUwNDMxOGJhMTc0YTZlN2Y2ZDA3ZmJkYzg1Y7MKYmg=: 00:20:23.104 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:20:23.104 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:23.104 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.104 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.104 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.104 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:23.104 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:23.104 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:23.360 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:23.360 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.360 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:23.360 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:23.360 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:23.360 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.360 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.360 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.360 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.360 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.360 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.360 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.618 00:20:23.618 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.618 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.618 01:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.875 01:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.875 01:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.875 01:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:23.875 01:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.875 01:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.875 01:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.875 { 00:20:23.875 "cntlid": 67, 00:20:23.875 "qid": 0, 00:20:23.875 "state": "enabled", 00:20:23.875 "thread": "nvmf_tgt_poll_group_000", 00:20:23.875 "listen_address": { 00:20:23.875 "trtype": "TCP", 00:20:23.875 "adrfam": "IPv4", 00:20:23.875 "traddr": "10.0.0.2", 00:20:23.875 "trsvcid": "4420" 00:20:23.875 }, 00:20:23.875 "peer_address": { 00:20:23.875 "trtype": "TCP", 00:20:23.875 "adrfam": "IPv4", 00:20:23.875 "traddr": "10.0.0.1", 00:20:23.875 "trsvcid": "53576" 00:20:23.875 }, 00:20:23.875 "auth": { 00:20:23.875 "state": "completed", 00:20:23.875 "digest": "sha384", 00:20:23.875 "dhgroup": "ffdhe3072" 00:20:23.875 } 00:20:23.875 } 00:20:23.875 ]' 00:20:23.875 01:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.875 01:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.875 01:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.875 01:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:23.875 01:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.875 01:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.875 01:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.875 01:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.133 01:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWU5YjkxMGQzZDI4NzAxMzYyZGUyZGE0YzFhMjQ1MWabzSDy: --dhchap-ctrl-secret DHHC-1:02:NjgwZWViMmJiNmIwMDM5YzI3OTg1ZDczM2VlMjllZjMwNWNiYzI3N2I5YjU2ODZm6Vpb7Q==: 00:20:25.064 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.064 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.064 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.064 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.064 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.064 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.064 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:25.064 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:25.322 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha384 ffdhe3072 2 00:20:25.322 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.322 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:25.322 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:25.322 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:25.322 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.322 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.322 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.322 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.322 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.322 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.322 01:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.886 00:20:25.886 01:02:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.886 01:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.886 01:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.143 01:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.143 01:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.143 01:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.143 01:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.143 01:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.143 01:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:26.144 { 00:20:26.144 "cntlid": 69, 00:20:26.144 "qid": 0, 00:20:26.144 "state": "enabled", 00:20:26.144 "thread": "nvmf_tgt_poll_group_000", 00:20:26.144 "listen_address": { 00:20:26.144 "trtype": "TCP", 00:20:26.144 "adrfam": "IPv4", 00:20:26.144 "traddr": "10.0.0.2", 00:20:26.144 "trsvcid": "4420" 00:20:26.144 }, 00:20:26.144 "peer_address": { 00:20:26.144 "trtype": "TCP", 00:20:26.144 "adrfam": "IPv4", 00:20:26.144 "traddr": "10.0.0.1", 00:20:26.144 "trsvcid": "53588" 00:20:26.144 }, 00:20:26.144 "auth": { 00:20:26.144 "state": "completed", 00:20:26.144 "digest": "sha384", 00:20:26.144 "dhgroup": "ffdhe3072" 00:20:26.144 } 00:20:26.144 } 00:20:26.144 ]' 00:20:26.144 01:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.144 01:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.144 01:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.144 01:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:26.144 01:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.144 01:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.144 01:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.144 01:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.401 01:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGJjZDc2M2E4ZDI5ZGQ2NDk5ZTJmNzg5OGE5NTMwNWYzZDg4ZWQxMDVkNmY0MmQ0SvADzQ==: --dhchap-ctrl-secret DHHC-1:01:NDFkNjVhYjJmZDBhZTkzY2FiOTA2NGYyZGFmZDE1MjLxfCW9: 00:20:27.333 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.333 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.333 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.333 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:27.333 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.333 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:27.333 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:27.333 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:27.590 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:27.590 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.590 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:27.590 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:27.590 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:27.590 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.590 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:27.590 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.590 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.590 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:20:27.590 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:27.590 01:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:28.155 00:20:28.155 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.155 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.155 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.155 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.155 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.155 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.155 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.155 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.155 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.155 { 00:20:28.155 "cntlid": 71, 00:20:28.155 "qid": 0, 00:20:28.155 "state": "enabled", 00:20:28.155 "thread": "nvmf_tgt_poll_group_000", 
00:20:28.155 "listen_address": { 00:20:28.155 "trtype": "TCP", 00:20:28.155 "adrfam": "IPv4", 00:20:28.155 "traddr": "10.0.0.2", 00:20:28.155 "trsvcid": "4420" 00:20:28.155 }, 00:20:28.155 "peer_address": { 00:20:28.155 "trtype": "TCP", 00:20:28.155 "adrfam": "IPv4", 00:20:28.155 "traddr": "10.0.0.1", 00:20:28.155 "trsvcid": "53626" 00:20:28.155 }, 00:20:28.155 "auth": { 00:20:28.155 "state": "completed", 00:20:28.155 "digest": "sha384", 00:20:28.155 "dhgroup": "ffdhe3072" 00:20:28.155 } 00:20:28.155 } 00:20:28.155 ]' 00:20:28.155 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.413 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.413 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.413 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:28.413 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.413 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.413 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.413 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.672 01:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NWQ0OTYwYzBkMzg5ZDc5YzhkYTk4OWE2NWRhYmM3MjUyOGQ3YTA0OWI0MmI3ZjhkMTI0OTAwZDljYjcyMzM1Y2lFBjk=: 
00:20:29.639 01:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.639 01:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.639 01:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.639 01:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.639 01:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.639 01:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:29.639 01:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:29.639 01:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:29.639 01:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:29.896 01:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:29.896 01:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.896 01:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:29.896 01:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:29.896 01:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:20:29.896 01:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.896 01:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.896 01:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.896 01:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.896 01:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.896 01:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.896 01:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.153 00:20:30.410 01:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:30.410 01:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:30.410 01:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.667 01:03:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.667 01:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.667 01:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.667 01:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.667 01:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.667 01:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:30.667 { 00:20:30.667 "cntlid": 73, 00:20:30.667 "qid": 0, 00:20:30.667 "state": "enabled", 00:20:30.667 "thread": "nvmf_tgt_poll_group_000", 00:20:30.667 "listen_address": { 00:20:30.667 "trtype": "TCP", 00:20:30.667 "adrfam": "IPv4", 00:20:30.667 "traddr": "10.0.0.2", 00:20:30.667 "trsvcid": "4420" 00:20:30.667 }, 00:20:30.667 "peer_address": { 00:20:30.667 "trtype": "TCP", 00:20:30.667 "adrfam": "IPv4", 00:20:30.667 "traddr": "10.0.0.1", 00:20:30.667 "trsvcid": "51474" 00:20:30.667 }, 00:20:30.667 "auth": { 00:20:30.667 "state": "completed", 00:20:30.667 "digest": "sha384", 00:20:30.667 "dhgroup": "ffdhe4096" 00:20:30.667 } 00:20:30.667 } 00:20:30.667 ]' 00:20:30.667 01:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:30.667 01:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.667 01:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:30.667 01:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:30.667 01:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.667 01:03:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.667 01:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.667 01:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.925 01:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNmQ2NjQzMGM3MjFjMTBmY2Q1Yjk0NmZjNmZhMWIxNzg1MDc4OTNjYzI2N2UygDroDA==: --dhchap-ctrl-secret DHHC-1:03:ZTZlYWY3MzM1MDBhMWZhZjljMDgxNDczMTNlZDg0YjQzYWQ3NGUwNDMxOGJhMTc0YTZlN2Y2ZDA3ZmJkYzg1Y7MKYmg=: 00:20:31.857 01:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.857 01:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.857 01:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.857 01:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.857 01:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.857 01:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.857 01:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe4096 00:20:31.857 01:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:32.115 01:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:32.115 01:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.115 01:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:32.115 01:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:32.115 01:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:32.115 01:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.115 01:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.115 01:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.115 01:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.115 01:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.115 01:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.115 01:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.681 00:20:32.681 01:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.681 01:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.681 01:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.939 01:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.939 01:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.939 01:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.939 01:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.939 01:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.939 01:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.939 { 00:20:32.939 "cntlid": 75, 00:20:32.939 "qid": 0, 00:20:32.939 "state": "enabled", 00:20:32.939 "thread": "nvmf_tgt_poll_group_000", 00:20:32.939 "listen_address": { 00:20:32.939 "trtype": "TCP", 00:20:32.939 "adrfam": "IPv4", 00:20:32.939 "traddr": "10.0.0.2", 00:20:32.939 "trsvcid": "4420" 00:20:32.939 }, 00:20:32.939 "peer_address": { 00:20:32.939 "trtype": "TCP", 00:20:32.939 "adrfam": "IPv4", 00:20:32.939 "traddr": "10.0.0.1", 00:20:32.939 "trsvcid": "51502" 00:20:32.939 
}, 00:20:32.939 "auth": { 00:20:32.939 "state": "completed", 00:20:32.939 "digest": "sha384", 00:20:32.939 "dhgroup": "ffdhe4096" 00:20:32.939 } 00:20:32.939 } 00:20:32.939 ]' 00:20:32.939 01:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.939 01:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.939 01:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.939 01:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:32.939 01:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.939 01:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.939 01:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.939 01:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.196 01:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWU5YjkxMGQzZDI4NzAxMzYyZGUyZGE0YzFhMjQ1MWabzSDy: --dhchap-ctrl-secret DHHC-1:02:NjgwZWViMmJiNmIwMDM5YzI3OTg1ZDczM2VlMjllZjMwNWNiYzI3N2I5YjU2ODZm6Vpb7Q==: 00:20:34.127 01:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.127 01:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.127 01:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.127 01:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.127 01:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.127 01:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:34.127 01:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:34.127 01:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:34.385 01:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:34.385 01:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.385 01:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:34.385 01:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:34.385 01:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:34.385 01:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.385 01:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:20:34.385 01:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.385 01:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.385 01:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.385 01:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.385 01:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.951 00:20:34.951 01:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.951 01:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.951 01:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.209 01:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.209 01:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.209 01:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.209 01:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:20:35.209 01:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.209 01:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.209 { 00:20:35.209 "cntlid": 77, 00:20:35.209 "qid": 0, 00:20:35.209 "state": "enabled", 00:20:35.209 "thread": "nvmf_tgt_poll_group_000", 00:20:35.209 "listen_address": { 00:20:35.209 "trtype": "TCP", 00:20:35.209 "adrfam": "IPv4", 00:20:35.209 "traddr": "10.0.0.2", 00:20:35.209 "trsvcid": "4420" 00:20:35.209 }, 00:20:35.209 "peer_address": { 00:20:35.209 "trtype": "TCP", 00:20:35.209 "adrfam": "IPv4", 00:20:35.209 "traddr": "10.0.0.1", 00:20:35.209 "trsvcid": "51522" 00:20:35.209 }, 00:20:35.209 "auth": { 00:20:35.209 "state": "completed", 00:20:35.209 "digest": "sha384", 00:20:35.209 "dhgroup": "ffdhe4096" 00:20:35.209 } 00:20:35.209 } 00:20:35.209 ]' 00:20:35.209 01:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.209 01:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.209 01:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.209 01:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:35.209 01:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.209 01:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.209 01:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.209 01:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:35.467 01:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGJjZDc2M2E4ZDI5ZGQ2NDk5ZTJmNzg5OGE5NTMwNWYzZDg4ZWQxMDVkNmY0MmQ0SvADzQ==: --dhchap-ctrl-secret DHHC-1:01:NDFkNjVhYjJmZDBhZTkzY2FiOTA2NGYyZGFmZDE1MjLxfCW9: 00:20:36.400 01:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.400 01:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.400 01:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.400 01:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.400 01:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.400 01:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:36.400 01:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.400 01:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.658 01:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:36.658 01:03:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.658 01:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:36.658 01:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:36.658 01:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:36.658 01:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.658 01:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:36.658 01:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.658 01:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.658 01:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.658 01:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:36.658 01:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:37.224 00:20:37.224 01:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:37.224 01:03:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:37.224 01:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.482 01:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.482 01:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.482 01:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.482 01:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.482 01:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.482 01:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.482 { 00:20:37.482 "cntlid": 79, 00:20:37.482 "qid": 0, 00:20:37.482 "state": "enabled", 00:20:37.482 "thread": "nvmf_tgt_poll_group_000", 00:20:37.482 "listen_address": { 00:20:37.482 "trtype": "TCP", 00:20:37.482 "adrfam": "IPv4", 00:20:37.482 "traddr": "10.0.0.2", 00:20:37.482 "trsvcid": "4420" 00:20:37.482 }, 00:20:37.482 "peer_address": { 00:20:37.482 "trtype": "TCP", 00:20:37.482 "adrfam": "IPv4", 00:20:37.482 "traddr": "10.0.0.1", 00:20:37.482 "trsvcid": "51536" 00:20:37.482 }, 00:20:37.482 "auth": { 00:20:37.482 "state": "completed", 00:20:37.482 "digest": "sha384", 00:20:37.482 "dhgroup": "ffdhe4096" 00:20:37.482 } 00:20:37.482 } 00:20:37.482 ]' 00:20:37.482 01:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.482 01:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.482 01:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.483 01:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:37.483 01:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.483 01:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.483 01:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.483 01:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.740 01:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NWQ0OTYwYzBkMzg5ZDc5YzhkYTk4OWE2NWRhYmM3MjUyOGQ3YTA0OWI0MmI3ZjhkMTI0OTAwZDljYjcyMzM1Y2lFBjk=: 00:20:38.674 01:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.674 01:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.674 01:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.674 01:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.674 01:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.674 01:03:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:38.674 01:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.674 01:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:38.674 01:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:38.932 01:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:38.932 01:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.932 01:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:38.932 01:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:38.932 01:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:38.932 01:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.932 01:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.932 01:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.932 01:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.932 01:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.932 01:03:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.932 01:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.498 00:20:39.498 01:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:39.498 01:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:39.498 01:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.757 01:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.757 01:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.757 01:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.757 01:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.757 01:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.757 01:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.757 { 00:20:39.757 "cntlid": 81, 00:20:39.757 "qid": 0, 00:20:39.757 "state": "enabled", 00:20:39.757 "thread": 
"nvmf_tgt_poll_group_000", 00:20:39.757 "listen_address": { 00:20:39.757 "trtype": "TCP", 00:20:39.757 "adrfam": "IPv4", 00:20:39.757 "traddr": "10.0.0.2", 00:20:39.757 "trsvcid": "4420" 00:20:39.757 }, 00:20:39.757 "peer_address": { 00:20:39.757 "trtype": "TCP", 00:20:39.757 "adrfam": "IPv4", 00:20:39.757 "traddr": "10.0.0.1", 00:20:39.757 "trsvcid": "33428" 00:20:39.757 }, 00:20:39.757 "auth": { 00:20:39.757 "state": "completed", 00:20:39.757 "digest": "sha384", 00:20:39.757 "dhgroup": "ffdhe6144" 00:20:39.757 } 00:20:39.757 } 00:20:39.757 ]' 00:20:39.757 01:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.757 01:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.015 01:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:40.015 01:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:40.015 01:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:40.015 01:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.015 01:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.015 01:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.273 01:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:ODkwNmQ2NjQzMGM3MjFjMTBmY2Q1Yjk0NmZjNmZhMWIxNzg1MDc4OTNjYzI2N2UygDroDA==: --dhchap-ctrl-secret DHHC-1:03:ZTZlYWY3MzM1MDBhMWZhZjljMDgxNDczMTNlZDg0YjQzYWQ3NGUwNDMxOGJhMTc0YTZlN2Y2ZDA3ZmJkYzg1Y7MKYmg=: 00:20:41.207 01:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.207 01:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:41.207 01:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.207 01:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.207 01:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.207 01:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.207 01:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:41.207 01:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:41.465 01:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:41.465 01:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:41.465 01:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:41.465 01:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe6144 00:20:41.465 01:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:41.465 01:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.465 01:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.465 01:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.465 01:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.465 01:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.465 01:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.465 01:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.033 00:20:42.033 01:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.033 01:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.033 01:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.291 01:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.291 01:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.291 01:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.291 01:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.291 01:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.291 01:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:42.291 { 00:20:42.291 "cntlid": 83, 00:20:42.291 "qid": 0, 00:20:42.291 "state": "enabled", 00:20:42.291 "thread": "nvmf_tgt_poll_group_000", 00:20:42.291 "listen_address": { 00:20:42.291 "trtype": "TCP", 00:20:42.291 "adrfam": "IPv4", 00:20:42.291 "traddr": "10.0.0.2", 00:20:42.291 "trsvcid": "4420" 00:20:42.291 }, 00:20:42.291 "peer_address": { 00:20:42.291 "trtype": "TCP", 00:20:42.291 "adrfam": "IPv4", 00:20:42.291 "traddr": "10.0.0.1", 00:20:42.291 "trsvcid": "33448" 00:20:42.291 }, 00:20:42.291 "auth": { 00:20:42.291 "state": "completed", 00:20:42.291 "digest": "sha384", 00:20:42.291 "dhgroup": "ffdhe6144" 00:20:42.291 } 00:20:42.291 } 00:20:42.291 ]' 00:20:42.291 01:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:42.291 01:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.291 01:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:42.291 01:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:42.291 01:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:42.548 01:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.548 01:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.548 01:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.808 01:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWU5YjkxMGQzZDI4NzAxMzYyZGUyZGE0YzFhMjQ1MWabzSDy: --dhchap-ctrl-secret DHHC-1:02:NjgwZWViMmJiNmIwMDM5YzI3OTg1ZDczM2VlMjllZjMwNWNiYzI3N2I5YjU2ODZm6Vpb7Q==: 00:20:43.744 01:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.744 01:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.744 01:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.744 01:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.744 01:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.744 01:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:43.744 01:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:43.744 01:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:44.001 01:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:44.001 01:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.001 01:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:44.001 01:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:44.001 01:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:44.001 01:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.001 01:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.001 01:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.001 01:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.001 01:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.001 01:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.001 01:03:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.567 00:20:44.567 01:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:44.567 01:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:44.567 01:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.824 01:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.824 01:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.824 01:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.824 01:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.824 01:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.824 01:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.824 { 00:20:44.824 "cntlid": 85, 00:20:44.824 "qid": 0, 00:20:44.824 "state": "enabled", 00:20:44.824 "thread": "nvmf_tgt_poll_group_000", 00:20:44.824 "listen_address": { 00:20:44.824 "trtype": "TCP", 00:20:44.824 "adrfam": "IPv4", 00:20:44.824 "traddr": "10.0.0.2", 00:20:44.824 "trsvcid": "4420" 00:20:44.824 }, 00:20:44.824 "peer_address": { 00:20:44.824 "trtype": "TCP", 00:20:44.824 "adrfam": "IPv4", 00:20:44.824 "traddr": "10.0.0.1", 
00:20:44.824 "trsvcid": "33474" 00:20:44.824 }, 00:20:44.824 "auth": { 00:20:44.824 "state": "completed", 00:20:44.824 "digest": "sha384", 00:20:44.824 "dhgroup": "ffdhe6144" 00:20:44.824 } 00:20:44.824 } 00:20:44.824 ]' 00:20:44.824 01:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.824 01:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.824 01:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.824 01:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:44.824 01:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:44.824 01:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.824 01:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.824 01:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.082 01:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGJjZDc2M2E4ZDI5ZGQ2NDk5ZTJmNzg5OGE5NTMwNWYzZDg4ZWQxMDVkNmY0MmQ0SvADzQ==: --dhchap-ctrl-secret DHHC-1:01:NDFkNjVhYjJmZDBhZTkzY2FiOTA2NGYyZGFmZDE1MjLxfCW9: 00:20:46.049 01:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.049 01:03:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.049 01:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.049 01:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.049 01:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.049 01:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.049 01:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:46.049 01:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:46.307 01:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:46.307 01:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:46.307 01:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:46.307 01:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:46.307 01:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:46.307 01:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.307 01:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:46.307 01:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.307 01:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.307 01:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.307 01:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:46.307 01:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:46.874 00:20:46.874 01:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:46.874 01:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:46.874 01:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.132 01:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.132 01:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.132 01:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.132 01:03:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.132 01:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.132 01:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:47.132 { 00:20:47.132 "cntlid": 87, 00:20:47.132 "qid": 0, 00:20:47.132 "state": "enabled", 00:20:47.132 "thread": "nvmf_tgt_poll_group_000", 00:20:47.132 "listen_address": { 00:20:47.132 "trtype": "TCP", 00:20:47.132 "adrfam": "IPv4", 00:20:47.132 "traddr": "10.0.0.2", 00:20:47.132 "trsvcid": "4420" 00:20:47.132 }, 00:20:47.132 "peer_address": { 00:20:47.132 "trtype": "TCP", 00:20:47.132 "adrfam": "IPv4", 00:20:47.132 "traddr": "10.0.0.1", 00:20:47.132 "trsvcid": "33498" 00:20:47.132 }, 00:20:47.132 "auth": { 00:20:47.132 "state": "completed", 00:20:47.132 "digest": "sha384", 00:20:47.132 "dhgroup": "ffdhe6144" 00:20:47.132 } 00:20:47.132 } 00:20:47.132 ]' 00:20:47.132 01:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:47.132 01:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.132 01:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.390 01:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:47.390 01:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.390 01:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.390 01:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.390 01:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.647 01:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NWQ0OTYwYzBkMzg5ZDc5YzhkYTk4OWE2NWRhYmM3MjUyOGQ3YTA0OWI0MmI3ZjhkMTI0OTAwZDljYjcyMzM1Y2lFBjk=: 00:20:48.585 01:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.585 01:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.585 01:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.585 01:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.585 01:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.585 01:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:48.585 01:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:48.585 01:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:48.586 01:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:48.843 01:03:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:48.843 01:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:48.843 01:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:48.843 01:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:48.843 01:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:48.843 01:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.843 01:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.843 01:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.843 01:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.843 01:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.844 01:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.844 01:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:20:49.778 00:20:49.778 01:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.778 01:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:49.778 01:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.064 01:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.064 01:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.064 01:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.064 01:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.064 01:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.064 01:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:50.064 { 00:20:50.064 "cntlid": 89, 00:20:50.064 "qid": 0, 00:20:50.064 "state": "enabled", 00:20:50.064 "thread": "nvmf_tgt_poll_group_000", 00:20:50.064 "listen_address": { 00:20:50.064 "trtype": "TCP", 00:20:50.064 "adrfam": "IPv4", 00:20:50.064 "traddr": "10.0.0.2", 00:20:50.064 "trsvcid": "4420" 00:20:50.064 }, 00:20:50.064 "peer_address": { 00:20:50.064 "trtype": "TCP", 00:20:50.064 "adrfam": "IPv4", 00:20:50.064 "traddr": "10.0.0.1", 00:20:50.064 "trsvcid": "46452" 00:20:50.064 }, 00:20:50.064 "auth": { 00:20:50.064 "state": "completed", 00:20:50.064 "digest": "sha384", 00:20:50.064 "dhgroup": "ffdhe8192" 00:20:50.064 } 00:20:50.064 } 00:20:50.064 ]' 00:20:50.064 01:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:50.064 
01:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.064 01:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:50.064 01:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:50.064 01:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:50.064 01:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.064 01:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.064 01:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.322 01:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNmQ2NjQzMGM3MjFjMTBmY2Q1Yjk0NmZjNmZhMWIxNzg1MDc4OTNjYzI2N2UygDroDA==: --dhchap-ctrl-secret DHHC-1:03:ZTZlYWY3MzM1MDBhMWZhZjljMDgxNDczMTNlZDg0YjQzYWQ3NGUwNDMxOGJhMTc0YTZlN2Y2ZDA3ZmJkYzg1Y7MKYmg=: 00:20:51.255 01:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.255 01:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.255 01:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:51.255 01:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.255 01:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.255 01:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:51.255 01:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:51.255 01:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:51.513 01:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:51.513 01:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.513 01:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:51.513 01:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:51.513 01:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:51.513 01:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.513 01:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.513 01:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.513 01:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:51.513 01:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.513 01:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.513 01:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.449 00:20:52.449 01:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:52.449 01:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:52.449 01:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.707 01:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.707 01:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.707 01:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.707 01:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.707 01:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.707 01:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:20:52.707 { 00:20:52.707 "cntlid": 91, 00:20:52.707 "qid": 0, 00:20:52.707 "state": "enabled", 00:20:52.707 "thread": "nvmf_tgt_poll_group_000", 00:20:52.707 "listen_address": { 00:20:52.707 "trtype": "TCP", 00:20:52.707 "adrfam": "IPv4", 00:20:52.707 "traddr": "10.0.0.2", 00:20:52.707 "trsvcid": "4420" 00:20:52.707 }, 00:20:52.707 "peer_address": { 00:20:52.707 "trtype": "TCP", 00:20:52.707 "adrfam": "IPv4", 00:20:52.707 "traddr": "10.0.0.1", 00:20:52.707 "trsvcid": "46488" 00:20:52.707 }, 00:20:52.707 "auth": { 00:20:52.707 "state": "completed", 00:20:52.707 "digest": "sha384", 00:20:52.707 "dhgroup": "ffdhe8192" 00:20:52.707 } 00:20:52.707 } 00:20:52.707 ]' 00:20:52.707 01:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:52.707 01:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.707 01:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:52.708 01:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:52.708 01:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:52.966 01:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.966 01:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.966 01:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.225 01:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWU5YjkxMGQzZDI4NzAxMzYyZGUyZGE0YzFhMjQ1MWabzSDy: --dhchap-ctrl-secret DHHC-1:02:NjgwZWViMmJiNmIwMDM5YzI3OTg1ZDczM2VlMjllZjMwNWNiYzI3N2I5YjU2ODZm6Vpb7Q==: 00:20:54.157 01:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.157 01:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.157 01:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.157 01:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.157 01:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.157 01:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:54.157 01:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:54.157 01:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:54.415 01:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:20:54.415 01:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:54.415 01:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:54.415 01:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:54.415 01:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:54.415 01:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.415 01:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.415 01:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.415 01:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.415 01:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.415 01:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.415 01:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.353 00:20:55.353 01:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:55.353 01:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:55.353 01:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.353 01:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.353 01:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.353 01:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.353 01:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.353 01:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.353 01:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.353 { 00:20:55.353 "cntlid": 93, 00:20:55.353 "qid": 0, 00:20:55.353 "state": "enabled", 00:20:55.353 "thread": "nvmf_tgt_poll_group_000", 00:20:55.353 "listen_address": { 00:20:55.353 "trtype": "TCP", 00:20:55.353 "adrfam": "IPv4", 00:20:55.353 "traddr": "10.0.0.2", 00:20:55.353 "trsvcid": "4420" 00:20:55.353 }, 00:20:55.353 "peer_address": { 00:20:55.353 "trtype": "TCP", 00:20:55.353 "adrfam": "IPv4", 00:20:55.353 "traddr": "10.0.0.1", 00:20:55.353 "trsvcid": "46522" 00:20:55.353 }, 00:20:55.353 "auth": { 00:20:55.353 "state": "completed", 00:20:55.353 "digest": "sha384", 00:20:55.353 "dhgroup": "ffdhe8192" 00:20:55.353 } 00:20:55.353 } 00:20:55.353 ]' 00:20:55.353 01:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:55.611 01:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:55.611 01:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:55.611 01:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 
00:20:55.611 01:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:55.611 01:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.611 01:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.611 01:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.870 01:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGJjZDc2M2E4ZDI5ZGQ2NDk5ZTJmNzg5OGE5NTMwNWYzZDg4ZWQxMDVkNmY0MmQ0SvADzQ==: --dhchap-ctrl-secret DHHC-1:01:NDFkNjVhYjJmZDBhZTkzY2FiOTA2NGYyZGFmZDE1MjLxfCW9: 00:20:56.805 01:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.805 01:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.805 01:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.805 01:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.805 01:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.805 01:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.805 01:03:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:56.805 01:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:57.063 01:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:20:57.063 01:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:57.063 01:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:57.063 01:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:57.063 01:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:57.063 01:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.063 01:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:57.063 01:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.063 01:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.063 01:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.063 01:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:20:57.063 01:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:57.999 00:20:57.999 01:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:57.999 01:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:57.999 01:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.258 01:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.258 01:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.258 01:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.258 01:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.258 01:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.258 01:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.258 { 00:20:58.258 "cntlid": 95, 00:20:58.258 "qid": 0, 00:20:58.258 "state": "enabled", 00:20:58.258 "thread": "nvmf_tgt_poll_group_000", 00:20:58.258 "listen_address": { 00:20:58.258 "trtype": "TCP", 00:20:58.258 "adrfam": "IPv4", 00:20:58.258 "traddr": "10.0.0.2", 00:20:58.258 "trsvcid": "4420" 00:20:58.258 }, 00:20:58.258 "peer_address": { 00:20:58.258 "trtype": "TCP", 00:20:58.258 "adrfam": "IPv4", 00:20:58.258 "traddr": "10.0.0.1", 
00:20:58.258 "trsvcid": "46564" 00:20:58.258 }, 00:20:58.258 "auth": { 00:20:58.258 "state": "completed", 00:20:58.258 "digest": "sha384", 00:20:58.258 "dhgroup": "ffdhe8192" 00:20:58.258 } 00:20:58.258 } 00:20:58.258 ]' 00:20:58.258 01:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:58.258 01:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:58.258 01:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:58.258 01:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:58.258 01:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:58.258 01:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.258 01:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.258 01:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.517 01:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NWQ0OTYwYzBkMzg5ZDc5YzhkYTk4OWE2NWRhYmM3MjUyOGQ3YTA0OWI0MmI3ZjhkMTI0OTAwZDljYjcyMzM1Y2lFBjk=: 00:20:59.893 01:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.893 01:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.893 01:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.893 01:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.893 01:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.893 01:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:59.893 01:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:59.893 01:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.893 01:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:59.893 01:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:59.893 01:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:20:59.893 01:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:59.893 01:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:59.893 01:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:59.893 01:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:59.893 01:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.893 01:03:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.893 01:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.893 01:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.893 01:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.893 01:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.893 01:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.150 00:21:00.150 01:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.150 01:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.150 01:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.406 01:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.406 01:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:00.406 01:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.406 01:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.406 01:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.406 01:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.406 { 00:21:00.406 "cntlid": 97, 00:21:00.406 "qid": 0, 00:21:00.406 "state": "enabled", 00:21:00.406 "thread": "nvmf_tgt_poll_group_000", 00:21:00.407 "listen_address": { 00:21:00.407 "trtype": "TCP", 00:21:00.407 "adrfam": "IPv4", 00:21:00.407 "traddr": "10.0.0.2", 00:21:00.407 "trsvcid": "4420" 00:21:00.407 }, 00:21:00.407 "peer_address": { 00:21:00.407 "trtype": "TCP", 00:21:00.407 "adrfam": "IPv4", 00:21:00.407 "traddr": "10.0.0.1", 00:21:00.407 "trsvcid": "33174" 00:21:00.407 }, 00:21:00.407 "auth": { 00:21:00.407 "state": "completed", 00:21:00.407 "digest": "sha512", 00:21:00.407 "dhgroup": "null" 00:21:00.407 } 00:21:00.407 } 00:21:00.407 ]' 00:21:00.407 01:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.663 01:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.663 01:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.663 01:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:00.663 01:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:00.663 01:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.663 01:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:21:00.663 01:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.921 01:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNmQ2NjQzMGM3MjFjMTBmY2Q1Yjk0NmZjNmZhMWIxNzg1MDc4OTNjYzI2N2UygDroDA==: --dhchap-ctrl-secret DHHC-1:03:ZTZlYWY3MzM1MDBhMWZhZjljMDgxNDczMTNlZDg0YjQzYWQ3NGUwNDMxOGJhMTc0YTZlN2Y2ZDA3ZmJkYzg1Y7MKYmg=: 00:21:01.857 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.857 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.857 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.857 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.857 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.857 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:01.857 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:01.857 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:21:02.115 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:02.115 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:02.115 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:02.115 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:02.115 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:02.115 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.115 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.115 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.115 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.115 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.115 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.115 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.414 00:21:02.414 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:02.414 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.414 01:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.675 01:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.675 01:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.675 01:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.675 01:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.675 01:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.675 01:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.675 { 00:21:02.675 "cntlid": 99, 00:21:02.675 "qid": 0, 00:21:02.675 "state": "enabled", 00:21:02.675 "thread": "nvmf_tgt_poll_group_000", 00:21:02.675 "listen_address": { 00:21:02.675 "trtype": "TCP", 00:21:02.675 "adrfam": "IPv4", 00:21:02.675 "traddr": "10.0.0.2", 00:21:02.675 "trsvcid": "4420" 00:21:02.675 }, 00:21:02.675 "peer_address": { 00:21:02.675 "trtype": "TCP", 00:21:02.675 "adrfam": "IPv4", 00:21:02.675 "traddr": "10.0.0.1", 00:21:02.675 "trsvcid": "33208" 00:21:02.675 }, 00:21:02.675 "auth": { 00:21:02.675 "state": "completed", 00:21:02.675 "digest": "sha512", 00:21:02.675 "dhgroup": "null" 00:21:02.675 } 00:21:02.675 } 00:21:02.675 ]' 00:21:02.675 
01:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:02.675 01:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.675 01:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:02.675 01:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:02.933 01:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:02.933 01:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.933 01:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.933 01:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.190 01:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWU5YjkxMGQzZDI4NzAxMzYyZGUyZGE0YzFhMjQ1MWabzSDy: --dhchap-ctrl-secret DHHC-1:02:NjgwZWViMmJiNmIwMDM5YzI3OTg1ZDczM2VlMjllZjMwNWNiYzI3N2I5YjU2ODZm6Vpb7Q==: 00:21:04.120 01:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.120 01:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.120 01:03:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.120 01:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.120 01:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.120 01:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:04.120 01:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:04.120 01:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:04.378 01:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:04.378 01:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:04.378 01:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:04.378 01:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:04.378 01:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:04.378 01:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.378 01:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.378 01:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.378 01:03:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.378 01:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.378 01:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.378 01:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.636 00:21:04.636 01:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.636 01:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:04.636 01:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.895 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.895 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.895 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.895 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.895 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:21:04.895 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.895 { 00:21:04.895 "cntlid": 101, 00:21:04.895 "qid": 0, 00:21:04.895 "state": "enabled", 00:21:04.895 "thread": "nvmf_tgt_poll_group_000", 00:21:04.895 "listen_address": { 00:21:04.895 "trtype": "TCP", 00:21:04.895 "adrfam": "IPv4", 00:21:04.895 "traddr": "10.0.0.2", 00:21:04.895 "trsvcid": "4420" 00:21:04.895 }, 00:21:04.895 "peer_address": { 00:21:04.895 "trtype": "TCP", 00:21:04.895 "adrfam": "IPv4", 00:21:04.895 "traddr": "10.0.0.1", 00:21:04.895 "trsvcid": "33244" 00:21:04.895 }, 00:21:04.895 "auth": { 00:21:04.895 "state": "completed", 00:21:04.895 "digest": "sha512", 00:21:04.895 "dhgroup": "null" 00:21:04.895 } 00:21:04.895 } 00:21:04.895 ]' 00:21:04.895 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:04.895 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.895 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:04.895 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:04.895 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:04.895 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.895 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.895 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.153 01:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGJjZDc2M2E4ZDI5ZGQ2NDk5ZTJmNzg5OGE5NTMwNWYzZDg4ZWQxMDVkNmY0MmQ0SvADzQ==: --dhchap-ctrl-secret DHHC-1:01:NDFkNjVhYjJmZDBhZTkzY2FiOTA2NGYyZGFmZDE1MjLxfCW9: 00:21:06.088 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.088 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.088 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.088 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.347 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.347 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:06.347 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:06.348 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:06.348 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:06.348 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:06.348 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:06.348 01:03:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:06.348 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:06.348 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.348 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:06.348 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.348 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.348 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.348 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:06.348 01:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:06.913 00:21:06.913 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:06.913 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.913 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:21:06.913 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.913 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.913 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.913 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.913 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.913 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:06.913 { 00:21:06.913 "cntlid": 103, 00:21:06.913 "qid": 0, 00:21:06.913 "state": "enabled", 00:21:06.913 "thread": "nvmf_tgt_poll_group_000", 00:21:06.913 "listen_address": { 00:21:06.913 "trtype": "TCP", 00:21:06.913 "adrfam": "IPv4", 00:21:06.913 "traddr": "10.0.0.2", 00:21:06.913 "trsvcid": "4420" 00:21:06.913 }, 00:21:06.913 "peer_address": { 00:21:06.913 "trtype": "TCP", 00:21:06.913 "adrfam": "IPv4", 00:21:06.913 "traddr": "10.0.0.1", 00:21:06.913 "trsvcid": "33274" 00:21:06.913 }, 00:21:06.913 "auth": { 00:21:06.913 "state": "completed", 00:21:06.913 "digest": "sha512", 00:21:06.913 "dhgroup": "null" 00:21:06.913 } 00:21:06.913 } 00:21:06.913 ]' 00:21:06.913 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:07.171 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.171 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:07.171 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:07.171 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:21:07.171 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.171 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.171 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.428 01:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NWQ0OTYwYzBkMzg5ZDc5YzhkYTk4OWE2NWRhYmM3MjUyOGQ3YTA0OWI0MmI3ZjhkMTI0OTAwZDljYjcyMzM1Y2lFBjk=: 00:21:08.361 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.361 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.361 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.361 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.361 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.361 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:08.361 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.361 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:08.361 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:08.619 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:08.619 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:08.619 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:08.619 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:08.619 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:08.619 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.619 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.619 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.619 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.619 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.619 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key 
ckey0 00:21:08.619 01:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.877 00:21:08.877 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:08.877 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:08.877 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.135 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.135 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.135 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.135 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.135 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.135 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.135 { 00:21:09.135 "cntlid": 105, 00:21:09.135 "qid": 0, 00:21:09.135 "state": "enabled", 00:21:09.135 "thread": "nvmf_tgt_poll_group_000", 00:21:09.135 "listen_address": { 00:21:09.135 "trtype": "TCP", 00:21:09.135 "adrfam": "IPv4", 00:21:09.135 "traddr": "10.0.0.2", 00:21:09.135 "trsvcid": "4420" 00:21:09.135 }, 00:21:09.135 "peer_address": { 00:21:09.135 "trtype": "TCP", 00:21:09.135 "adrfam": "IPv4", 
00:21:09.135 "traddr": "10.0.0.1", 00:21:09.135 "trsvcid": "33310" 00:21:09.135 }, 00:21:09.135 "auth": { 00:21:09.135 "state": "completed", 00:21:09.135 "digest": "sha512", 00:21:09.135 "dhgroup": "ffdhe2048" 00:21:09.135 } 00:21:09.135 } 00:21:09.135 ]' 00:21:09.135 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:09.135 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.135 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:09.393 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:09.393 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.393 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.393 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.393 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.651 01:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNmQ2NjQzMGM3MjFjMTBmY2Q1Yjk0NmZjNmZhMWIxNzg1MDc4OTNjYzI2N2UygDroDA==: --dhchap-ctrl-secret DHHC-1:03:ZTZlYWY3MzM1MDBhMWZhZjljMDgxNDczMTNlZDg0YjQzYWQ3NGUwNDMxOGJhMTc0YTZlN2Y2ZDA3ZmJkYzg1Y7MKYmg=: 00:21:10.588 01:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.588 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.588 01:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.588 01:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.588 01:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.588 01:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.588 01:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:10.588 01:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:10.588 01:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:10.846 01:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:10.846 01:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:10.846 01:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:10.846 01:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:10.846 01:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:10.846 01:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.846 01:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.846 01:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.846 01:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.846 01:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.846 01:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.846 01:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.104 00:21:11.104 01:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.104 01:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.104 01:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.362 01:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.362 01:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.362 01:03:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.362 01:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.362 01:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.362 01:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.362 { 00:21:11.362 "cntlid": 107, 00:21:11.362 "qid": 0, 00:21:11.362 "state": "enabled", 00:21:11.362 "thread": "nvmf_tgt_poll_group_000", 00:21:11.362 "listen_address": { 00:21:11.362 "trtype": "TCP", 00:21:11.362 "adrfam": "IPv4", 00:21:11.362 "traddr": "10.0.0.2", 00:21:11.362 "trsvcid": "4420" 00:21:11.362 }, 00:21:11.362 "peer_address": { 00:21:11.362 "trtype": "TCP", 00:21:11.362 "adrfam": "IPv4", 00:21:11.362 "traddr": "10.0.0.1", 00:21:11.362 "trsvcid": "46034" 00:21:11.362 }, 00:21:11.362 "auth": { 00:21:11.362 "state": "completed", 00:21:11.362 "digest": "sha512", 00:21:11.362 "dhgroup": "ffdhe2048" 00:21:11.362 } 00:21:11.362 } 00:21:11.362 ]' 00:21:11.362 01:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:11.362 01:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.362 01:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:11.620 01:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:11.620 01:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.620 01:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.620 01:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.620 01:03:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.878 01:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWU5YjkxMGQzZDI4NzAxMzYyZGUyZGE0YzFhMjQ1MWabzSDy: --dhchap-ctrl-secret DHHC-1:02:NjgwZWViMmJiNmIwMDM5YzI3OTg1ZDczM2VlMjllZjMwNWNiYzI3N2I5YjU2ODZm6Vpb7Q==: 00:21:12.813 01:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.813 01:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:12.813 01:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.813 01:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.813 01:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.813 01:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:12.813 01:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:12.813 01:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:13.072 01:03:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:21:13.072 01:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.072 01:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:13.072 01:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:13.072 01:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:13.072 01:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.072 01:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.072 01:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.072 01:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.072 01:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.072 01:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.072 01:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:21:13.329 00:21:13.329 01:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.329 01:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.329 01:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.894 01:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.894 01:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.894 01:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.894 01:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.894 01:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.894 01:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:13.894 { 00:21:13.894 "cntlid": 109, 00:21:13.894 "qid": 0, 00:21:13.894 "state": "enabled", 00:21:13.894 "thread": "nvmf_tgt_poll_group_000", 00:21:13.894 "listen_address": { 00:21:13.894 "trtype": "TCP", 00:21:13.894 "adrfam": "IPv4", 00:21:13.894 "traddr": "10.0.0.2", 00:21:13.894 "trsvcid": "4420" 00:21:13.894 }, 00:21:13.894 "peer_address": { 00:21:13.894 "trtype": "TCP", 00:21:13.894 "adrfam": "IPv4", 00:21:13.894 "traddr": "10.0.0.1", 00:21:13.894 "trsvcid": "46064" 00:21:13.894 }, 00:21:13.894 "auth": { 00:21:13.894 "state": "completed", 00:21:13.894 "digest": "sha512", 00:21:13.894 "dhgroup": "ffdhe2048" 00:21:13.894 } 00:21:13.894 } 00:21:13.894 ]' 00:21:13.894 01:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:13.894 
01:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.894 01:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:13.894 01:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:13.894 01:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:13.894 01:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.894 01:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.894 01:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.152 01:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGJjZDc2M2E4ZDI5ZGQ2NDk5ZTJmNzg5OGE5NTMwNWYzZDg4ZWQxMDVkNmY0MmQ0SvADzQ==: --dhchap-ctrl-secret DHHC-1:01:NDFkNjVhYjJmZDBhZTkzY2FiOTA2NGYyZGFmZDE1MjLxfCW9: 00:21:15.090 01:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.090 01:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.090 01:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.090 01:03:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.090 01:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.090 01:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.090 01:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:15.090 01:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:15.348 01:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:15.348 01:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.348 01:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:15.348 01:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:15.348 01:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:15.348 01:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.348 01:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:15.348 01:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.348 01:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.348 01:03:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.348 01:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:15.348 01:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:15.915 00:21:15.915 01:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:15.915 01:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:15.915 01:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.915 01:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.915 01:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.915 01:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.915 01:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.915 01:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.915 01:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:15.915 { 00:21:15.915 "cntlid": 111, 00:21:15.915 "qid": 0, 
00:21:15.915 "state": "enabled", 00:21:15.915 "thread": "nvmf_tgt_poll_group_000", 00:21:15.915 "listen_address": { 00:21:15.915 "trtype": "TCP", 00:21:15.915 "adrfam": "IPv4", 00:21:15.915 "traddr": "10.0.0.2", 00:21:15.915 "trsvcid": "4420" 00:21:15.915 }, 00:21:15.915 "peer_address": { 00:21:15.915 "trtype": "TCP", 00:21:15.915 "adrfam": "IPv4", 00:21:15.915 "traddr": "10.0.0.1", 00:21:15.915 "trsvcid": "46096" 00:21:15.915 }, 00:21:15.915 "auth": { 00:21:15.915 "state": "completed", 00:21:15.915 "digest": "sha512", 00:21:15.915 "dhgroup": "ffdhe2048" 00:21:15.915 } 00:21:15.915 } 00:21:15.915 ]' 00:21:15.915 01:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:16.173 01:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.173 01:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:16.173 01:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:16.173 01:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:16.173 01:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.173 01:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.173 01:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.431 01:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:03:NWQ0OTYwYzBkMzg5ZDc5YzhkYTk4OWE2NWRhYmM3MjUyOGQ3YTA0OWI0MmI3ZjhkMTI0OTAwZDljYjcyMzM1Y2lFBjk=: 00:21:17.365 01:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.365 01:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.365 01:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.365 01:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.365 01:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.366 01:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:17.366 01:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:17.366 01:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:17.366 01:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:17.623 01:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:17.623 01:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:17.623 01:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:17.623 01:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:17.623 01:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:17.623 01:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.623 01:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.623 01:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.623 01:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.623 01:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.623 01:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.623 01:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.881 00:21:17.881 01:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:17.881 01:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:17.881 01:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.140 01:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.140 01:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.140 01:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.140 01:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.140 01:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.140 01:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:18.140 { 00:21:18.140 "cntlid": 113, 00:21:18.140 "qid": 0, 00:21:18.140 "state": "enabled", 00:21:18.140 "thread": "nvmf_tgt_poll_group_000", 00:21:18.140 "listen_address": { 00:21:18.140 "trtype": "TCP", 00:21:18.140 "adrfam": "IPv4", 00:21:18.140 "traddr": "10.0.0.2", 00:21:18.140 "trsvcid": "4420" 00:21:18.140 }, 00:21:18.140 "peer_address": { 00:21:18.140 "trtype": "TCP", 00:21:18.140 "adrfam": "IPv4", 00:21:18.140 "traddr": "10.0.0.1", 00:21:18.140 "trsvcid": "46114" 00:21:18.140 }, 00:21:18.140 "auth": { 00:21:18.140 "state": "completed", 00:21:18.140 "digest": "sha512", 00:21:18.140 "dhgroup": "ffdhe3072" 00:21:18.140 } 00:21:18.140 } 00:21:18.140 ]' 00:21:18.398 01:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:18.398 01:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.398 01:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.398 01:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 
00:21:18.398 01:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:18.398 01:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.398 01:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.398 01:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.683 01:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNmQ2NjQzMGM3MjFjMTBmY2Q1Yjk0NmZjNmZhMWIxNzg1MDc4OTNjYzI2N2UygDroDA==: --dhchap-ctrl-secret DHHC-1:03:ZTZlYWY3MzM1MDBhMWZhZjljMDgxNDczMTNlZDg0YjQzYWQ3NGUwNDMxOGJhMTc0YTZlN2Y2ZDA3ZmJkYzg1Y7MKYmg=: 00:21:19.621 01:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.621 01:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.621 01:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.621 01:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.621 01:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.621 01:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 
00:21:19.621 01:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:19.621 01:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:19.879 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:19.879 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:19.879 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:19.879 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:19.879 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:19.879 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.879 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.879 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.879 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.879 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.879 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
-n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.879 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.137 00:21:20.137 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:20.137 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:20.137 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.396 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.396 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.396 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.396 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.396 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.396 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:20.396 { 00:21:20.396 "cntlid": 115, 00:21:20.396 "qid": 0, 00:21:20.396 "state": "enabled", 00:21:20.396 "thread": "nvmf_tgt_poll_group_000", 00:21:20.396 "listen_address": { 00:21:20.396 "trtype": "TCP", 00:21:20.396 "adrfam": "IPv4", 00:21:20.396 "traddr": "10.0.0.2", 00:21:20.396 "trsvcid": "4420" 00:21:20.396 }, 00:21:20.396 "peer_address": { 
00:21:20.396 "trtype": "TCP", 00:21:20.396 "adrfam": "IPv4", 00:21:20.396 "traddr": "10.0.0.1", 00:21:20.396 "trsvcid": "41330" 00:21:20.396 }, 00:21:20.396 "auth": { 00:21:20.396 "state": "completed", 00:21:20.396 "digest": "sha512", 00:21:20.396 "dhgroup": "ffdhe3072" 00:21:20.396 } 00:21:20.396 } 00:21:20.396 ]' 00:21:20.396 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:20.654 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.654 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:20.654 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:20.654 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:20.654 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.654 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.654 01:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.911 01:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWU5YjkxMGQzZDI4NzAxMzYyZGUyZGE0YzFhMjQ1MWabzSDy: --dhchap-ctrl-secret DHHC-1:02:NjgwZWViMmJiNmIwMDM5YzI3OTg1ZDczM2VlMjllZjMwNWNiYzI3N2I5YjU2ODZm6Vpb7Q==: 00:21:21.847 01:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:21:21.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.847 01:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.847 01:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.847 01:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.847 01:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.847 01:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:21.847 01:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:21.847 01:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:22.105 01:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:22.105 01:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:22.105 01:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:22.105 01:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:22.105 01:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:22.105 01:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.105 01:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.105 01:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.105 01:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.105 01:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.105 01:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.105 01:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.671 00:21:22.671 01:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:22.671 01:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:22.671 01:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.929 01:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.929 01:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.929 01:03:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.929 01:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.929 01:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.929 01:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:22.929 { 00:21:22.929 "cntlid": 117, 00:21:22.929 "qid": 0, 00:21:22.929 "state": "enabled", 00:21:22.929 "thread": "nvmf_tgt_poll_group_000", 00:21:22.929 "listen_address": { 00:21:22.929 "trtype": "TCP", 00:21:22.929 "adrfam": "IPv4", 00:21:22.929 "traddr": "10.0.0.2", 00:21:22.929 "trsvcid": "4420" 00:21:22.929 }, 00:21:22.929 "peer_address": { 00:21:22.929 "trtype": "TCP", 00:21:22.929 "adrfam": "IPv4", 00:21:22.929 "traddr": "10.0.0.1", 00:21:22.929 "trsvcid": "41350" 00:21:22.929 }, 00:21:22.929 "auth": { 00:21:22.929 "state": "completed", 00:21:22.929 "digest": "sha512", 00:21:22.929 "dhgroup": "ffdhe3072" 00:21:22.929 } 00:21:22.929 } 00:21:22.929 ]' 00:21:22.929 01:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:22.929 01:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.929 01:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:22.929 01:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:22.929 01:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:22.929 01:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.929 01:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.929 01:03:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.188 01:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGJjZDc2M2E4ZDI5ZGQ2NDk5ZTJmNzg5OGE5NTMwNWYzZDg4ZWQxMDVkNmY0MmQ0SvADzQ==: --dhchap-ctrl-secret DHHC-1:01:NDFkNjVhYjJmZDBhZTkzY2FiOTA2NGYyZGFmZDE1MjLxfCW9: 00:21:24.123 01:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.123 01:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:24.123 01:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.123 01:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.123 01:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.123 01:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:24.123 01:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:24.123 01:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:24.381 01:03:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:24.381 01:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:24.381 01:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:24.381 01:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:24.381 01:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:24.381 01:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.381 01:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:24.381 01:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.381 01:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.381 01:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.381 01:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:24.381 01:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:24.950 00:21:24.950 01:03:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:24.950 01:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:24.950 01:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.208 01:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.208 01:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.208 01:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.208 01:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.208 01:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.208 01:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:25.208 { 00:21:25.208 "cntlid": 119, 00:21:25.208 "qid": 0, 00:21:25.208 "state": "enabled", 00:21:25.208 "thread": "nvmf_tgt_poll_group_000", 00:21:25.208 "listen_address": { 00:21:25.208 "trtype": "TCP", 00:21:25.208 "adrfam": "IPv4", 00:21:25.208 "traddr": "10.0.0.2", 00:21:25.208 "trsvcid": "4420" 00:21:25.208 }, 00:21:25.208 "peer_address": { 00:21:25.208 "trtype": "TCP", 00:21:25.208 "adrfam": "IPv4", 00:21:25.208 "traddr": "10.0.0.1", 00:21:25.208 "trsvcid": "41378" 00:21:25.208 }, 00:21:25.208 "auth": { 00:21:25.208 "state": "completed", 00:21:25.208 "digest": "sha512", 00:21:25.208 "dhgroup": "ffdhe3072" 00:21:25.208 } 00:21:25.208 } 00:21:25.208 ]' 00:21:25.208 01:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:25.208 01:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.208 01:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:25.208 01:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:25.208 01:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:25.208 01:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.208 01:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.208 01:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.466 01:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NWQ0OTYwYzBkMzg5ZDc5YzhkYTk4OWE2NWRhYmM3MjUyOGQ3YTA0OWI0MmI3ZjhkMTI0OTAwZDljYjcyMzM1Y2lFBjk=: 00:21:26.399 01:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.399 01:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.399 01:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.399 01:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.399 01:03:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.399 01:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:26.399 01:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:26.399 01:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:26.399 01:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:26.657 01:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:26.657 01:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:26.657 01:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:26.657 01:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:26.657 01:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:26.657 01:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.657 01:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.657 01:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.657 01:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.657 01:03:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.657 01:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.657 01:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.222 00:21:27.222 01:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:27.222 01:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:27.222 01:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.479 01:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.479 01:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.479 01:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.479 01:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.479 01:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.479 01:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:27.479 { 
00:21:27.479 "cntlid": 121, 00:21:27.479 "qid": 0, 00:21:27.479 "state": "enabled", 00:21:27.479 "thread": "nvmf_tgt_poll_group_000", 00:21:27.479 "listen_address": { 00:21:27.479 "trtype": "TCP", 00:21:27.479 "adrfam": "IPv4", 00:21:27.479 "traddr": "10.0.0.2", 00:21:27.479 "trsvcid": "4420" 00:21:27.479 }, 00:21:27.479 "peer_address": { 00:21:27.479 "trtype": "TCP", 00:21:27.479 "adrfam": "IPv4", 00:21:27.479 "traddr": "10.0.0.1", 00:21:27.479 "trsvcid": "41398" 00:21:27.479 }, 00:21:27.479 "auth": { 00:21:27.479 "state": "completed", 00:21:27.479 "digest": "sha512", 00:21:27.479 "dhgroup": "ffdhe4096" 00:21:27.479 } 00:21:27.479 } 00:21:27.479 ]' 00:21:27.479 01:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:27.479 01:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.479 01:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:27.479 01:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:27.479 01:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.479 01:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.479 01:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.479 01:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.738 01:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNmQ2NjQzMGM3MjFjMTBmY2Q1Yjk0NmZjNmZhMWIxNzg1MDc4OTNjYzI2N2UygDroDA==: --dhchap-ctrl-secret DHHC-1:03:ZTZlYWY3MzM1MDBhMWZhZjljMDgxNDczMTNlZDg0YjQzYWQ3NGUwNDMxOGJhMTc0YTZlN2Y2ZDA3ZmJkYzg1Y7MKYmg=: 00:21:28.671 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.671 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.671 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.671 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.671 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.671 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.671 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:28.671 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:28.929 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:21:28.929 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.929 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:28.929 01:03:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:28.929 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:28.929 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.929 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.929 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.929 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.929 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.929 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.929 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.495 00:21:29.495 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:29.495 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:29.495 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.752 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.752 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.752 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.752 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.752 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.752 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.752 { 00:21:29.752 "cntlid": 123, 00:21:29.752 "qid": 0, 00:21:29.752 "state": "enabled", 00:21:29.752 "thread": "nvmf_tgt_poll_group_000", 00:21:29.752 "listen_address": { 00:21:29.752 "trtype": "TCP", 00:21:29.752 "adrfam": "IPv4", 00:21:29.753 "traddr": "10.0.0.2", 00:21:29.753 "trsvcid": "4420" 00:21:29.753 }, 00:21:29.753 "peer_address": { 00:21:29.753 "trtype": "TCP", 00:21:29.753 "adrfam": "IPv4", 00:21:29.753 "traddr": "10.0.0.1", 00:21:29.753 "trsvcid": "40388" 00:21:29.753 }, 00:21:29.753 "auth": { 00:21:29.753 "state": "completed", 00:21:29.753 "digest": "sha512", 00:21:29.753 "dhgroup": "ffdhe4096" 00:21:29.753 } 00:21:29.753 } 00:21:29.753 ]' 00:21:29.753 01:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.753 01:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.753 01:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.753 01:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 
00:21:29.753 01:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.753 01:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.753 01:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.753 01:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.012 01:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWU5YjkxMGQzZDI4NzAxMzYyZGUyZGE0YzFhMjQ1MWabzSDy: --dhchap-ctrl-secret DHHC-1:02:NjgwZWViMmJiNmIwMDM5YzI3OTg1ZDczM2VlMjllZjMwNWNiYzI3N2I5YjU2ODZm6Vpb7Q==: 00:21:30.950 01:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.950 01:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.950 01:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.950 01:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.950 01:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.950 01:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.950 01:04:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:30.950 01:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:31.207 01:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:31.207 01:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:31.207 01:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:31.207 01:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:31.207 01:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:31.207 01:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.207 01:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.207 01:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.207 01:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.207 01:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.207 01:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.207 01:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.775 00:21:31.775 01:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.775 01:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.775 01:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.033 01:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.033 01:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.033 01:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.033 01:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.033 01:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.033 01:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:32.033 { 00:21:32.033 "cntlid": 125, 00:21:32.033 "qid": 0, 00:21:32.033 "state": "enabled", 00:21:32.033 "thread": "nvmf_tgt_poll_group_000", 00:21:32.033 "listen_address": { 00:21:32.033 "trtype": "TCP", 00:21:32.033 "adrfam": "IPv4", 00:21:32.033 "traddr": "10.0.0.2", 00:21:32.033 "trsvcid": "4420" 00:21:32.033 }, 00:21:32.033 "peer_address": { 
00:21:32.033 "trtype": "TCP", 00:21:32.033 "adrfam": "IPv4", 00:21:32.033 "traddr": "10.0.0.1", 00:21:32.033 "trsvcid": "40406" 00:21:32.033 }, 00:21:32.033 "auth": { 00:21:32.033 "state": "completed", 00:21:32.033 "digest": "sha512", 00:21:32.033 "dhgroup": "ffdhe4096" 00:21:32.033 } 00:21:32.033 } 00:21:32.033 ]' 00:21:32.033 01:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:32.033 01:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.033 01:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:32.033 01:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:32.033 01:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:32.033 01:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.033 01:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.034 01:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.293 01:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGJjZDc2M2E4ZDI5ZGQ2NDk5ZTJmNzg5OGE5NTMwNWYzZDg4ZWQxMDVkNmY0MmQ0SvADzQ==: --dhchap-ctrl-secret DHHC-1:01:NDFkNjVhYjJmZDBhZTkzY2FiOTA2NGYyZGFmZDE1MjLxfCW9: 00:21:33.227 01:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:21:33.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.227 01:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.227 01:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.227 01:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.227 01:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.227 01:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:33.227 01:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:33.227 01:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:33.485 01:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:33.485 01:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.485 01:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:33.485 01:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:33.485 01:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:33.485 01:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.485 01:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:33.485 01:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.485 01:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.485 01:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.485 01:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:33.485 01:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.051 00:21:34.051 01:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:34.051 01:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:34.051 01:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.309 01:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.309 01:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.309 01:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:21:34.309 01:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.309 01:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.309 01:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:34.309 { 00:21:34.309 "cntlid": 127, 00:21:34.309 "qid": 0, 00:21:34.309 "state": "enabled", 00:21:34.309 "thread": "nvmf_tgt_poll_group_000", 00:21:34.309 "listen_address": { 00:21:34.309 "trtype": "TCP", 00:21:34.309 "adrfam": "IPv4", 00:21:34.309 "traddr": "10.0.0.2", 00:21:34.309 "trsvcid": "4420" 00:21:34.309 }, 00:21:34.309 "peer_address": { 00:21:34.309 "trtype": "TCP", 00:21:34.309 "adrfam": "IPv4", 00:21:34.309 "traddr": "10.0.0.1", 00:21:34.309 "trsvcid": "40436" 00:21:34.309 }, 00:21:34.309 "auth": { 00:21:34.309 "state": "completed", 00:21:34.309 "digest": "sha512", 00:21:34.309 "dhgroup": "ffdhe4096" 00:21:34.309 } 00:21:34.309 } 00:21:34.309 ]' 00:21:34.309 01:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:34.309 01:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.309 01:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:34.309 01:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:34.309 01:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:34.309 01:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.309 01:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.309 01:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.569 01:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NWQ0OTYwYzBkMzg5ZDc5YzhkYTk4OWE2NWRhYmM3MjUyOGQ3YTA0OWI0MmI3ZjhkMTI0OTAwZDljYjcyMzM1Y2lFBjk=: 00:21:35.542 01:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.542 01:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.542 01:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.542 01:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.542 01:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.542 01:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:35.542 01:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:35.542 01:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:35.542 01:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:35.800 01:04:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:21:35.800 01:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:35.800 01:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:35.800 01:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:35.800 01:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:35.800 01:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.800 01:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.800 01:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.800 01:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.800 01:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.800 01:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.800 01:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:21:36.366 00:21:36.366 01:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:36.366 01:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:36.366 01:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.624 01:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.624 01:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.624 01:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.624 01:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.624 01:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.624 01:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:36.624 { 00:21:36.624 "cntlid": 129, 00:21:36.624 "qid": 0, 00:21:36.624 "state": "enabled", 00:21:36.624 "thread": "nvmf_tgt_poll_group_000", 00:21:36.624 "listen_address": { 00:21:36.624 "trtype": "TCP", 00:21:36.624 "adrfam": "IPv4", 00:21:36.624 "traddr": "10.0.0.2", 00:21:36.624 "trsvcid": "4420" 00:21:36.624 }, 00:21:36.624 "peer_address": { 00:21:36.624 "trtype": "TCP", 00:21:36.624 "adrfam": "IPv4", 00:21:36.624 "traddr": "10.0.0.1", 00:21:36.624 "trsvcid": "40464" 00:21:36.624 }, 00:21:36.624 "auth": { 00:21:36.624 "state": "completed", 00:21:36.624 "digest": "sha512", 00:21:36.624 "dhgroup": "ffdhe6144" 00:21:36.624 } 00:21:36.624 } 00:21:36.624 ]' 00:21:36.624 01:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:36.882 
01:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.882 01:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:36.882 01:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:36.882 01:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:36.882 01:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.882 01:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.882 01:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.140 01:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNmQ2NjQzMGM3MjFjMTBmY2Q1Yjk0NmZjNmZhMWIxNzg1MDc4OTNjYzI2N2UygDroDA==: --dhchap-ctrl-secret DHHC-1:03:ZTZlYWY3MzM1MDBhMWZhZjljMDgxNDczMTNlZDg0YjQzYWQ3NGUwNDMxOGJhMTc0YTZlN2Y2ZDA3ZmJkYzg1Y7MKYmg=: 00:21:38.076 01:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.076 01:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:38.076 01:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:38.076 01:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.076 01:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.076 01:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:38.076 01:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:38.076 01:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:38.334 01:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:38.335 01:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:38.335 01:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:38.335 01:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:38.335 01:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:38.335 01:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.335 01:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.335 01:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.335 01:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:38.335 01:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.335 01:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.335 01:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.903 00:21:38.903 01:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:38.903 01:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:38.903 01:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.161 01:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.161 01:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.161 01:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.161 01:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.161 01:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.161 01:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:21:39.161 { 00:21:39.161 "cntlid": 131, 00:21:39.161 "qid": 0, 00:21:39.161 "state": "enabled", 00:21:39.161 "thread": "nvmf_tgt_poll_group_000", 00:21:39.161 "listen_address": { 00:21:39.161 "trtype": "TCP", 00:21:39.161 "adrfam": "IPv4", 00:21:39.161 "traddr": "10.0.0.2", 00:21:39.161 "trsvcid": "4420" 00:21:39.161 }, 00:21:39.161 "peer_address": { 00:21:39.161 "trtype": "TCP", 00:21:39.161 "adrfam": "IPv4", 00:21:39.161 "traddr": "10.0.0.1", 00:21:39.161 "trsvcid": "40482" 00:21:39.161 }, 00:21:39.161 "auth": { 00:21:39.161 "state": "completed", 00:21:39.161 "digest": "sha512", 00:21:39.161 "dhgroup": "ffdhe6144" 00:21:39.161 } 00:21:39.161 } 00:21:39.161 ]' 00:21:39.161 01:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:39.161 01:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.161 01:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:39.161 01:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:39.161 01:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:39.161 01:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.161 01:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.161 01:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.420 01:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWU5YjkxMGQzZDI4NzAxMzYyZGUyZGE0YzFhMjQ1MWabzSDy: --dhchap-ctrl-secret DHHC-1:02:NjgwZWViMmJiNmIwMDM5YzI3OTg1ZDczM2VlMjllZjMwNWNiYzI3N2I5YjU2ODZm6Vpb7Q==: 00:21:40.357 01:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.615 01:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:40.615 01:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.615 01:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.615 01:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.615 01:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:40.615 01:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:40.615 01:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:40.873 01:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:21:40.873 01:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:40.873 01:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:40.873 01:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:40.874 01:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:40.874 01:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.874 01:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.874 01:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.874 01:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.874 01:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.874 01:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.874 01:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.446 00:21:41.446 01:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:41.446 01:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:41.446 01:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.704 01:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.704 01:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.704 01:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.704 01:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.704 01:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.704 01:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:41.704 { 00:21:41.704 "cntlid": 133, 00:21:41.704 "qid": 0, 00:21:41.704 "state": "enabled", 00:21:41.704 "thread": "nvmf_tgt_poll_group_000", 00:21:41.704 "listen_address": { 00:21:41.704 "trtype": "TCP", 00:21:41.704 "adrfam": "IPv4", 00:21:41.704 "traddr": "10.0.0.2", 00:21:41.704 "trsvcid": "4420" 00:21:41.704 }, 00:21:41.704 "peer_address": { 00:21:41.704 "trtype": "TCP", 00:21:41.704 "adrfam": "IPv4", 00:21:41.704 "traddr": "10.0.0.1", 00:21:41.704 "trsvcid": "58906" 00:21:41.704 }, 00:21:41.704 "auth": { 00:21:41.704 "state": "completed", 00:21:41.704 "digest": "sha512", 00:21:41.704 "dhgroup": "ffdhe6144" 00:21:41.704 } 00:21:41.704 } 00:21:41.704 ]' 00:21:41.704 01:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:41.704 01:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.704 01:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:41.704 01:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 
00:21:41.704 01:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:41.704 01:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.704 01:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.704 01:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.962 01:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGJjZDc2M2E4ZDI5ZGQ2NDk5ZTJmNzg5OGE5NTMwNWYzZDg4ZWQxMDVkNmY0MmQ0SvADzQ==: --dhchap-ctrl-secret DHHC-1:01:NDFkNjVhYjJmZDBhZTkzY2FiOTA2NGYyZGFmZDE1MjLxfCW9: 00:21:42.897 01:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.897 01:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.154 01:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.154 01:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.154 01:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.154 01:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:43.154 01:04:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:43.154 01:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:43.412 01:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:21:43.412 01:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:43.412 01:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:43.412 01:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:43.412 01:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:43.412 01:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.412 01:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:43.412 01:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.412 01:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.412 01:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.412 01:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:21:43.412 01:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:43.977 00:21:43.977 01:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:43.977 01:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.977 01:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:44.235 01:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.235 01:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.235 01:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.235 01:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.235 01:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.235 01:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:44.235 { 00:21:44.235 "cntlid": 135, 00:21:44.235 "qid": 0, 00:21:44.235 "state": "enabled", 00:21:44.235 "thread": "nvmf_tgt_poll_group_000", 00:21:44.235 "listen_address": { 00:21:44.235 "trtype": "TCP", 00:21:44.235 "adrfam": "IPv4", 00:21:44.235 "traddr": "10.0.0.2", 00:21:44.235 "trsvcid": "4420" 00:21:44.235 }, 00:21:44.235 "peer_address": { 00:21:44.235 "trtype": "TCP", 00:21:44.235 "adrfam": "IPv4", 00:21:44.235 "traddr": "10.0.0.1", 
00:21:44.235 "trsvcid": "58922" 00:21:44.235 }, 00:21:44.235 "auth": { 00:21:44.235 "state": "completed", 00:21:44.235 "digest": "sha512", 00:21:44.235 "dhgroup": "ffdhe6144" 00:21:44.235 } 00:21:44.235 } 00:21:44.235 ]' 00:21:44.235 01:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:44.235 01:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.235 01:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:44.235 01:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:44.235 01:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:44.235 01:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.235 01:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.235 01:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.495 01:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NWQ0OTYwYzBkMzg5ZDc5YzhkYTk4OWE2NWRhYmM3MjUyOGQ3YTA0OWI0MmI3ZjhkMTI0OTAwZDljYjcyMzM1Y2lFBjk=: 00:21:45.429 01:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.429 01:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.429 01:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.429 01:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.429 01:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.429 01:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:45.429 01:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:45.429 01:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:45.429 01:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:45.998 01:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:21:45.998 01:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:45.998 01:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:45.998 01:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:45.998 01:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:45.998 01:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.998 01:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.998 01:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.998 01:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.998 01:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.998 01:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.999 01:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.935 00:21:46.935 01:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:46.935 01:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.935 01:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:46.935 01:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.935 01:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.935 01:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.935 01:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.193 01:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.193 01:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:47.193 { 00:21:47.193 "cntlid": 137, 00:21:47.193 "qid": 0, 00:21:47.193 "state": "enabled", 00:21:47.193 "thread": "nvmf_tgt_poll_group_000", 00:21:47.193 "listen_address": { 00:21:47.193 "trtype": "TCP", 00:21:47.193 "adrfam": "IPv4", 00:21:47.193 "traddr": "10.0.0.2", 00:21:47.193 "trsvcid": "4420" 00:21:47.193 }, 00:21:47.193 "peer_address": { 00:21:47.193 "trtype": "TCP", 00:21:47.193 "adrfam": "IPv4", 00:21:47.193 "traddr": "10.0.0.1", 00:21:47.193 "trsvcid": "58944" 00:21:47.193 }, 00:21:47.193 "auth": { 00:21:47.193 "state": "completed", 00:21:47.193 "digest": "sha512", 00:21:47.193 "dhgroup": "ffdhe8192" 00:21:47.193 } 00:21:47.193 } 00:21:47.193 ]' 00:21:47.193 01:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:47.193 01:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.193 01:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:47.193 01:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:47.193 01:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:47.193 01:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.193 01:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.193 01:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.451 01:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNmQ2NjQzMGM3MjFjMTBmY2Q1Yjk0NmZjNmZhMWIxNzg1MDc4OTNjYzI2N2UygDroDA==: --dhchap-ctrl-secret DHHC-1:03:ZTZlYWY3MzM1MDBhMWZhZjljMDgxNDczMTNlZDg0YjQzYWQ3NGUwNDMxOGJhMTc0YTZlN2Y2ZDA3ZmJkYzg1Y7MKYmg=: 00:21:48.386 01:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.386 01:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:48.386 01:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.386 01:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.386 01:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.386 01:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:48.386 01:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:48.386 01:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:48.643 01:04:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:21:48.643 01:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:48.644 01:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:48.644 01:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:48.644 01:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:48.644 01:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.644 01:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.644 01:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.644 01:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.644 01:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.644 01:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.644 01:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:21:49.578 00:21:49.578 01:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:49.578 01:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:49.579 01:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.837 01:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.837 01:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.837 01:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.837 01:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.837 01:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.837 01:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:49.837 { 00:21:49.837 "cntlid": 139, 00:21:49.837 "qid": 0, 00:21:49.837 "state": "enabled", 00:21:49.837 "thread": "nvmf_tgt_poll_group_000", 00:21:49.837 "listen_address": { 00:21:49.837 "trtype": "TCP", 00:21:49.837 "adrfam": "IPv4", 00:21:49.837 "traddr": "10.0.0.2", 00:21:49.837 "trsvcid": "4420" 00:21:49.837 }, 00:21:49.837 "peer_address": { 00:21:49.837 "trtype": "TCP", 00:21:49.837 "adrfam": "IPv4", 00:21:49.837 "traddr": "10.0.0.1", 00:21:49.837 "trsvcid": "48792" 00:21:49.837 }, 00:21:49.837 "auth": { 00:21:49.837 "state": "completed", 00:21:49.837 "digest": "sha512", 00:21:49.837 "dhgroup": "ffdhe8192" 00:21:49.837 } 00:21:49.837 } 00:21:49.837 ]' 00:21:49.837 01:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:49.837 
01:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.837 01:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:49.837 01:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:49.837 01:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:49.837 01:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.837 01:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.837 01:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.405 01:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZWU5YjkxMGQzZDI4NzAxMzYyZGUyZGE0YzFhMjQ1MWabzSDy: --dhchap-ctrl-secret DHHC-1:02:NjgwZWViMmJiNmIwMDM5YzI3OTg1ZDczM2VlMjllZjMwNWNiYzI3N2I5YjU2ODZm6Vpb7Q==: 00:21:51.342 01:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.342 01:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.342 01:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.342 01:04:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.342 01:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.342 01:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:51.342 01:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:51.342 01:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:51.599 01:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:21:51.599 01:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:51.599 01:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:51.599 01:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:51.599 01:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:51.599 01:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.599 01:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.599 01:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.599 01:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.599 01:04:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.599 01:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.600 01:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.564 00:21:52.564 01:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:52.564 01:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:52.564 01:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.564 01:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.564 01:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.564 01:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.564 01:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.564 01:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.564 01:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:52.564 { 
00:21:52.564 "cntlid": 141, 00:21:52.564 "qid": 0, 00:21:52.564 "state": "enabled", 00:21:52.564 "thread": "nvmf_tgt_poll_group_000", 00:21:52.564 "listen_address": { 00:21:52.564 "trtype": "TCP", 00:21:52.564 "adrfam": "IPv4", 00:21:52.564 "traddr": "10.0.0.2", 00:21:52.564 "trsvcid": "4420" 00:21:52.564 }, 00:21:52.564 "peer_address": { 00:21:52.564 "trtype": "TCP", 00:21:52.564 "adrfam": "IPv4", 00:21:52.564 "traddr": "10.0.0.1", 00:21:52.564 "trsvcid": "48832" 00:21:52.564 }, 00:21:52.564 "auth": { 00:21:52.564 "state": "completed", 00:21:52.564 "digest": "sha512", 00:21:52.564 "dhgroup": "ffdhe8192" 00:21:52.564 } 00:21:52.564 } 00:21:52.564 ]' 00:21:52.564 01:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:52.821 01:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.821 01:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:52.821 01:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:52.821 01:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:52.821 01:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.821 01:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.821 01:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.079 01:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NGJjZDc2M2E4ZDI5ZGQ2NDk5ZTJmNzg5OGE5NTMwNWYzZDg4ZWQxMDVkNmY0MmQ0SvADzQ==: --dhchap-ctrl-secret DHHC-1:01:NDFkNjVhYjJmZDBhZTkzY2FiOTA2NGYyZGFmZDE1MjLxfCW9: 00:21:54.011 01:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.011 01:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.011 01:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.011 01:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.011 01:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.011 01:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:54.011 01:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:54.011 01:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:54.268 01:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:21:54.268 01:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:54.268 01:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:54.268 01:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe8192 00:21:54.268 01:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:54.268 01:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.268 01:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:54.268 01:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.268 01:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.268 01:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.268 01:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:54.268 01:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:55.201 00:21:55.201 01:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:55.201 01:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:55.201 01:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.458 01:04:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.458 01:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.458 01:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.458 01:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.458 01:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.458 01:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:55.458 { 00:21:55.458 "cntlid": 143, 00:21:55.458 "qid": 0, 00:21:55.458 "state": "enabled", 00:21:55.458 "thread": "nvmf_tgt_poll_group_000", 00:21:55.458 "listen_address": { 00:21:55.458 "trtype": "TCP", 00:21:55.458 "adrfam": "IPv4", 00:21:55.458 "traddr": "10.0.0.2", 00:21:55.458 "trsvcid": "4420" 00:21:55.458 }, 00:21:55.458 "peer_address": { 00:21:55.458 "trtype": "TCP", 00:21:55.458 "adrfam": "IPv4", 00:21:55.458 "traddr": "10.0.0.1", 00:21:55.458 "trsvcid": "48874" 00:21:55.458 }, 00:21:55.458 "auth": { 00:21:55.458 "state": "completed", 00:21:55.458 "digest": "sha512", 00:21:55.458 "dhgroup": "ffdhe8192" 00:21:55.458 } 00:21:55.458 } 00:21:55.458 ]' 00:21:55.458 01:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:55.458 01:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.458 01:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:55.715 01:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:55.715 01:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:55.715 01:04:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.715 01:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.715 01:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.972 01:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NWQ0OTYwYzBkMzg5ZDc5YzhkYTk4OWE2NWRhYmM3MjUyOGQ3YTA0OWI0MmI3ZjhkMTI0OTAwZDljYjcyMzM1Y2lFBjk=: 00:21:56.906 01:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.906 01:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.906 01:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.906 01:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.906 01:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.906 01:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:56.906 01:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:21:56.906 01:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:56.906 01:04:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:56.906 01:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:56.906 01:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:57.164 01:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:21:57.164 01:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:57.164 01:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:57.164 01:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:57.164 01:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:57.164 01:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.164 01:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.164 01:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.164 01:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.164 01:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:21:57.164 01:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.164 01:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.100 00:21:58.100 01:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:58.100 01:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:58.100 01:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.100 01:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.100 01:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.100 01:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.100 01:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.100 01:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.100 01:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:58.100 { 00:21:58.100 "cntlid": 145, 00:21:58.100 "qid": 0, 00:21:58.100 "state": "enabled", 
00:21:58.100 "thread": "nvmf_tgt_poll_group_000", 00:21:58.100 "listen_address": { 00:21:58.100 "trtype": "TCP", 00:21:58.100 "adrfam": "IPv4", 00:21:58.100 "traddr": "10.0.0.2", 00:21:58.100 "trsvcid": "4420" 00:21:58.100 }, 00:21:58.100 "peer_address": { 00:21:58.100 "trtype": "TCP", 00:21:58.100 "adrfam": "IPv4", 00:21:58.100 "traddr": "10.0.0.1", 00:21:58.100 "trsvcid": "48892" 00:21:58.100 }, 00:21:58.100 "auth": { 00:21:58.100 "state": "completed", 00:21:58.100 "digest": "sha512", 00:21:58.100 "dhgroup": "ffdhe8192" 00:21:58.100 } 00:21:58.100 } 00:21:58.100 ]' 00:21:58.100 01:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:58.358 01:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.358 01:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:58.358 01:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:58.358 01:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:58.358 01:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.358 01:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.358 01:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.616 01:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:ODkwNmQ2NjQzMGM3MjFjMTBmY2Q1Yjk0NmZjNmZhMWIxNzg1MDc4OTNjYzI2N2UygDroDA==: --dhchap-ctrl-secret DHHC-1:03:ZTZlYWY3MzM1MDBhMWZhZjljMDgxNDczMTNlZDg0YjQzYWQ3NGUwNDMxOGJhMTc0YTZlN2Y2ZDA3ZmJkYzg1Y7MKYmg=: 00:21:59.554 01:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.554 01:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:59.554 01:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.554 01:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.554 01:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.554 01:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:59.554 01:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.554 01:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.554 01:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.554 01:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:59.554 01:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:59.554 
01:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:59.554 01:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:59.554 01:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:59.554 01:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:59.554 01:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:59.554 01:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:59.554 01:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:00.491 request: 00:22:00.491 { 00:22:00.491 "name": "nvme0", 00:22:00.491 "trtype": "tcp", 00:22:00.491 "traddr": "10.0.0.2", 00:22:00.491 "adrfam": "ipv4", 00:22:00.491 "trsvcid": "4420", 00:22:00.491 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:00.491 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:00.491 "prchk_reftag": false, 00:22:00.491 "prchk_guard": false, 00:22:00.492 "hdgst": false, 00:22:00.492 "ddgst": false, 00:22:00.492 "dhchap_key": "key2", 
00:22:00.492 "method": "bdev_nvme_attach_controller", 00:22:00.492 "req_id": 1 00:22:00.492 } 00:22:00.492 Got JSON-RPC error response 00:22:00.492 response: 00:22:00.492 { 00:22:00.492 "code": -5, 00:22:00.492 "message": "Input/output error" 00:22:00.492 } 00:22:00.492 01:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:00.492 01:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:00.492 01:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:00.492 01:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:00.492 01:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:00.492 01:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.492 01:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.492 01:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.492 01:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.492 01:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.492 01:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.492 01:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.492 01:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT 
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:00.492 01:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:00.492 01:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:00.492 01:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:00.492 01:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:00.492 01:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:00.492 01:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:00.492 01:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:00.492 01:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:01.426 request: 00:22:01.426 { 00:22:01.426 "name": "nvme0", 00:22:01.426 
"trtype": "tcp", 00:22:01.426 "traddr": "10.0.0.2", 00:22:01.426 "adrfam": "ipv4", 00:22:01.426 "trsvcid": "4420", 00:22:01.426 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:01.426 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:01.426 "prchk_reftag": false, 00:22:01.426 "prchk_guard": false, 00:22:01.426 "hdgst": false, 00:22:01.426 "ddgst": false, 00:22:01.426 "dhchap_key": "key1", 00:22:01.426 "dhchap_ctrlr_key": "ckey2", 00:22:01.426 "method": "bdev_nvme_attach_controller", 00:22:01.426 "req_id": 1 00:22:01.426 } 00:22:01.426 Got JSON-RPC error response 00:22:01.426 response: 00:22:01.426 { 00:22:01.426 "code": -5, 00:22:01.426 "message": "Input/output error" 00:22:01.426 } 00:22:01.426 01:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:01.426 01:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:01.426 01:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:01.426 01:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:01.426 01:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.426 01:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.426 01:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.426 01:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.426 01:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
00:22:01.426 01:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.426 01:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.426 01:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.426 01:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.426 01:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:01.426 01:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.426 01:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:01.426 01:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:01.426 01:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:01.426 01:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:01.426 01:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.426 01:04:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.364 request: 00:22:02.364 { 00:22:02.364 "name": "nvme0", 00:22:02.364 "trtype": "tcp", 00:22:02.364 "traddr": "10.0.0.2", 00:22:02.364 "adrfam": "ipv4", 00:22:02.364 "trsvcid": "4420", 00:22:02.364 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:02.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:02.364 "prchk_reftag": false, 00:22:02.364 "prchk_guard": false, 00:22:02.364 "hdgst": false, 00:22:02.364 "ddgst": false, 00:22:02.365 "dhchap_key": "key1", 00:22:02.365 "dhchap_ctrlr_key": "ckey1", 00:22:02.365 "method": "bdev_nvme_attach_controller", 00:22:02.365 "req_id": 1 00:22:02.365 } 00:22:02.365 Got JSON-RPC error response 00:22:02.365 response: 00:22:02.365 { 00:22:02.365 "code": -5, 00:22:02.365 "message": "Input/output error" 00:22:02.365 } 00:22:02.365 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:02.365 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:02.365 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:02.365 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:02.365 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.365 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:22:02.365 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.365 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.365 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1834606 00:22:02.365 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1834606 ']' 00:22:02.365 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1834606 00:22:02.365 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:02.365 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:02.365 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1834606 00:22:02.365 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:02.365 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:02.365 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1834606' 00:22:02.365 killing process with pid 1834606 00:22:02.365 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1834606 00:22:02.365 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1834606 00:22:02.623 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:02.623 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:02.623 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # 
xtrace_disable 00:22:02.623 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.623 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1857090 00:22:02.623 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:02.623 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1857090 00:22:02.623 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1857090 ']' 00:22:02.623 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.623 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:02.623 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:02.623 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:02.623 01:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.881 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:02.881 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:02.881 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:02.881 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:02.881 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.881 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.881 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:02.881 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1857090 00:22:02.881 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1857090 ']' 00:22:02.881 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.881 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:02.881 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:02.881 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:02.881 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.152 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:03.152 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:03.152 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:03.152 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.152 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.152 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.152 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:03.152 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:03.152 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:03.152 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:03.152 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:03.152 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.152 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:03.152 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.152 
01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.152 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.152 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:03.152 01:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:04.091 00:22:04.091 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:04.091 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:04.091 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.349 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.349 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.349 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.349 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.349 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.349 01:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:04.349 { 00:22:04.349 "cntlid": 1, 00:22:04.349 "qid": 0, 00:22:04.349 "state": "enabled", 00:22:04.349 "thread": "nvmf_tgt_poll_group_000", 00:22:04.349 "listen_address": { 00:22:04.349 "trtype": "TCP", 00:22:04.349 "adrfam": "IPv4", 00:22:04.349 "traddr": "10.0.0.2", 00:22:04.349 "trsvcid": "4420" 00:22:04.349 }, 00:22:04.349 "peer_address": { 00:22:04.349 "trtype": "TCP", 00:22:04.349 "adrfam": "IPv4", 00:22:04.349 "traddr": "10.0.0.1", 00:22:04.349 "trsvcid": "32824" 00:22:04.349 }, 00:22:04.349 "auth": { 00:22:04.349 "state": "completed", 00:22:04.349 "digest": "sha512", 00:22:04.349 "dhgroup": "ffdhe8192" 00:22:04.349 } 00:22:04.349 } 00:22:04.349 ]' 00:22:04.349 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:04.349 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.349 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:04.349 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:04.349 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:04.349 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.349 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.349 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.607 01:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NWQ0OTYwYzBkMzg5ZDc5YzhkYTk4OWE2NWRhYmM3MjUyOGQ3YTA0OWI0MmI3ZjhkMTI0OTAwZDljYjcyMzM1Y2lFBjk=: 00:22:05.545 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.545 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.545 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.545 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.545 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.545 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:05.545 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.545 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.545 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.545 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:05.545 01:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:05.804 01:04:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:05.804 01:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:05.804 01:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:05.804 01:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:05.804 01:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.804 01:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:05.804 01:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.804 01:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:05.804 01:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:06.371 request: 00:22:06.371 { 00:22:06.371 "name": "nvme0", 00:22:06.371 "trtype": "tcp", 00:22:06.371 
"traddr": "10.0.0.2", 00:22:06.371 "adrfam": "ipv4", 00:22:06.371 "trsvcid": "4420", 00:22:06.371 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:06.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:06.371 "prchk_reftag": false, 00:22:06.371 "prchk_guard": false, 00:22:06.371 "hdgst": false, 00:22:06.371 "ddgst": false, 00:22:06.371 "dhchap_key": "key3", 00:22:06.371 "method": "bdev_nvme_attach_controller", 00:22:06.371 "req_id": 1 00:22:06.371 } 00:22:06.371 Got JSON-RPC error response 00:22:06.371 response: 00:22:06.371 { 00:22:06.371 "code": -5, 00:22:06.371 "message": "Input/output error" 00:22:06.371 } 00:22:06.371 01:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:06.371 01:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:06.371 01:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:06.371 01:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:06.371 01:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:06.371 01:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:06.371 01:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:06.371 01:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:06.629 01:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:06.629 01:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:06.629 01:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:06.629 01:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:06.629 01:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:06.629 01:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:06.629 01:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:06.629 01:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:06.629 01:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:06.887 request: 00:22:06.887 { 00:22:06.887 "name": "nvme0", 00:22:06.887 "trtype": "tcp", 00:22:06.887 "traddr": "10.0.0.2", 00:22:06.887 "adrfam": "ipv4", 00:22:06.887 "trsvcid": "4420", 00:22:06.887 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:06.887 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:06.887 "prchk_reftag": false, 00:22:06.887 "prchk_guard": false, 00:22:06.887 "hdgst": false, 00:22:06.887 "ddgst": false, 00:22:06.887 "dhchap_key": "key3", 00:22:06.887 "method": "bdev_nvme_attach_controller", 00:22:06.887 "req_id": 1 00:22:06.887 } 00:22:06.887 Got JSON-RPC error response 00:22:06.887 response: 00:22:06.887 { 00:22:06.887 "code": -5, 00:22:06.887 "message": "Input/output error" 00:22:06.887 } 00:22:06.887 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:06.887 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:06.887 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:06.887 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:06.887 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:06.887 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:06.887 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:06.887 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:06.887 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:06.887 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:22:07.145 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:07.145 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.145 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.145 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.145 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:07.145 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.145 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.145 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.145 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:07.145 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:07.145 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:07.145 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:07.145 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:07.145 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:07.145 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:07.146 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:07.146 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:07.404 request: 00:22:07.404 { 00:22:07.404 "name": "nvme0", 00:22:07.404 "trtype": "tcp", 00:22:07.404 "traddr": "10.0.0.2", 00:22:07.404 "adrfam": "ipv4", 00:22:07.404 "trsvcid": "4420", 00:22:07.404 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:07.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:07.404 "prchk_reftag": false, 00:22:07.404 "prchk_guard": false, 00:22:07.404 "hdgst": false, 00:22:07.404 "ddgst": false, 00:22:07.404 "dhchap_key": "key0", 00:22:07.404 "dhchap_ctrlr_key": "key1", 00:22:07.404 "method": "bdev_nvme_attach_controller", 00:22:07.404 "req_id": 1 00:22:07.404 } 00:22:07.404 Got JSON-RPC error response 00:22:07.404 response: 00:22:07.404 { 00:22:07.404 "code": -5, 00:22:07.404 "message": "Input/output error" 00:22:07.404 } 00:22:07.404 01:04:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:07.404 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:07.404 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:07.404 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:07.404 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:07.404 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:07.662 00:22:07.662 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:07.662 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:07.662 01:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.920 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.920 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.920 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:22:08.178 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:08.178 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:08.178 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1834626 00:22:08.178 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1834626 ']' 00:22:08.178 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1834626 00:22:08.178 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:08.178 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:08.178 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1834626 00:22:08.178 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:08.178 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:08.178 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1834626' 00:22:08.178 killing process with pid 1834626 00:22:08.178 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1834626 00:22:08.178 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1834626 00:22:08.771 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:08.771 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:08.771 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:08.771 01:04:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:08.771 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:22:08.771 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:08.771 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:08.771 rmmod nvme_tcp 00:22:08.771 rmmod nvme_fabrics 00:22:08.771 rmmod nvme_keyring 00:22:08.771 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:08.771 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:08.771 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:08.771 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1857090 ']' 00:22:08.771 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1857090 00:22:08.771 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1857090 ']' 00:22:08.771 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1857090 00:22:08.771 01:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:08.771 01:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:08.771 01:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1857090 00:22:08.771 01:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:08.771 01:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:08.771 01:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1857090' 00:22:08.771 killing process with pid 1857090 00:22:08.771 01:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1857090 00:22:08.771 01:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1857090 00:22:09.033 01:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:09.034 01:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:09.034 01:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:09.034 01:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:09.034 01:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:09.034 01:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.034 01:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:09.034 01:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.936 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:10.936 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Bwo /tmp/spdk.key-sha256.1KW /tmp/spdk.key-sha384.vGi /tmp/spdk.key-sha512.0zL /tmp/spdk.key-sha512.LhX /tmp/spdk.key-sha384.FIL /tmp/spdk.key-sha256.4g2 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:10.936 00:22:10.936 real 3m8.991s 00:22:10.936 user 7m20.094s 00:22:10.936 sys 0m25.042s 00:22:10.936 01:04:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:10.936 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.936 ************************************ 00:22:10.936 END TEST nvmf_auth_target 00:22:10.936 ************************************ 00:22:10.936 01:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:10.937 01:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:10.937 01:04:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:10.937 01:04:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:10.937 01:04:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:10.937 ************************************ 00:22:10.937 START TEST nvmf_bdevio_no_huge 00:22:10.937 ************************************ 00:22:10.937 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:11.194 * Looking for test storage... 
00:22:11.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:11.194 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:11.194 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:11.194 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:11.194 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:11.195 
01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:11.195 01:04:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:13.105 01:04:43 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:13.105 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:13.105 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:13.105 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:13.105 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.105 01:04:43 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:13.105 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:13.369 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:13.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:13.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:22:13.369 00:22:13.369 --- 10.0.0.2 ping statistics --- 00:22:13.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.369 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:22:13.369 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:13.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:13.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:22:13.369 00:22:13.369 --- 10.0.0.1 ping statistics --- 00:22:13.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.369 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:22:13.369 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:13.369 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:13.369 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:13.369 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:13.369 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:13.369 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:13.369 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:13.369 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:13.369 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:13.369 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:13.369 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:13.369 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:13.370 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:13.370 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1859857 00:22:13.370 01:04:43 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:13.370 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1859857 00:22:13.370 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 1859857 ']' 00:22:13.370 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.370 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:13.370 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:13.370 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:13.370 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:13.370 [2024-07-26 01:04:43.612210] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:22:13.370 [2024-07-26 01:04:43.612284] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:13.370 [2024-07-26 01:04:43.686692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:13.370 [2024-07-26 01:04:43.784609] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:13.370 [2024-07-26 01:04:43.784668] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:13.370 [2024-07-26 01:04:43.784689] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:13.370 [2024-07-26 01:04:43.784699] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:13.370 [2024-07-26 01:04:43.784709] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:13.370 [2024-07-26 01:04:43.784794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:13.370 [2024-07-26 01:04:43.784857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:22:13.370 [2024-07-26 01:04:43.784925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:22:13.370 [2024-07-26 01:04:43.784927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:13.629 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:13.629 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:22:13.629 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:13.629 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:13.629 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:13.629 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:13.629 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:13.629 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:22:13.629 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:13.629 [2024-07-26 01:04:43.907081] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:13.629 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.629 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:13.629 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.629 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:13.629 Malloc0 00:22:13.629 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.629 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:13.629 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.629 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:13.629 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.629 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:13.629 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.629 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:13.629 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.629 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:13.629 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.629 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:13.629 [2024-07-26 01:04:43.944999] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:13.629 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.629 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:13.629 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:13.629 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:13.630 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:13.630 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:13.630 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:13.630 { 00:22:13.630 "params": { 00:22:13.630 "name": "Nvme$subsystem", 00:22:13.630 "trtype": "$TEST_TRANSPORT", 00:22:13.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:13.630 "adrfam": "ipv4", 00:22:13.630 "trsvcid": "$NVMF_PORT", 00:22:13.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:13.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:13.630 "hdgst": ${hdgst:-false}, 00:22:13.630 "ddgst": ${ddgst:-false} 00:22:13.630 }, 00:22:13.630 "method": "bdev_nvme_attach_controller" 00:22:13.630 } 00:22:13.630 EOF 00:22:13.630 )") 00:22:13.630 01:04:43 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:13.630 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:22:13.630 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:13.630 01:04:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:13.630 "params": { 00:22:13.630 "name": "Nvme1", 00:22:13.630 "trtype": "tcp", 00:22:13.630 "traddr": "10.0.0.2", 00:22:13.630 "adrfam": "ipv4", 00:22:13.630 "trsvcid": "4420", 00:22:13.630 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:13.630 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:13.630 "hdgst": false, 00:22:13.630 "ddgst": false 00:22:13.630 }, 00:22:13.630 "method": "bdev_nvme_attach_controller" 00:22:13.630 }' 00:22:13.630 [2024-07-26 01:04:43.992841] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:22:13.630 [2024-07-26 01:04:43.992930] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1859887 ] 00:22:13.630 [2024-07-26 01:04:44.052961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:13.887 [2024-07-26 01:04:44.140660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:13.887 [2024-07-26 01:04:44.140709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:13.887 [2024-07-26 01:04:44.140712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.146 I/O targets: 00:22:14.146 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:14.146 00:22:14.146 00:22:14.146 CUnit - A unit testing framework for C - Version 2.1-3 00:22:14.146 http://cunit.sourceforge.net/ 00:22:14.146 00:22:14.146 00:22:14.146 Suite: bdevio tests on: Nvme1n1 00:22:14.146 Test: blockdev write read block 
...passed 00:22:14.146 Test: blockdev write zeroes read block ...passed 00:22:14.146 Test: blockdev write zeroes read no split ...passed 00:22:14.146 Test: blockdev write zeroes read split ...passed 00:22:14.146 Test: blockdev write zeroes read split partial ...passed 00:22:14.146 Test: blockdev reset ...[2024-07-26 01:04:44.510219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:14.146 [2024-07-26 01:04:44.510338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f054e0 (9): Bad file descriptor 00:22:14.146 [2024-07-26 01:04:44.529370] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:14.146 passed 00:22:14.146 Test: blockdev write read 8 blocks ...passed 00:22:14.146 Test: blockdev write read size > 128k ...passed 00:22:14.146 Test: blockdev write read invalid size ...passed 00:22:14.404 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:14.404 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:14.404 Test: blockdev write read max offset ...passed 00:22:14.404 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:14.404 Test: blockdev writev readv 8 blocks ...passed 00:22:14.404 Test: blockdev writev readv 30 x 1block ...passed 00:22:14.404 Test: blockdev writev readv block ...passed 00:22:14.404 Test: blockdev writev readv size > 128k ...passed 00:22:14.404 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:14.404 Test: blockdev comparev and writev ...[2024-07-26 01:04:44.705731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:14.404 [2024-07-26 01:04:44.705767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:14.404 [2024-07-26 01:04:44.705791] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:14.405 [2024-07-26 01:04:44.705807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:14.405 [2024-07-26 01:04:44.706137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:14.405 [2024-07-26 01:04:44.706162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:14.405 [2024-07-26 01:04:44.706184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:14.405 [2024-07-26 01:04:44.706201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:14.405 [2024-07-26 01:04:44.706528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:14.405 [2024-07-26 01:04:44.706557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:14.405 [2024-07-26 01:04:44.706579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:14.405 [2024-07-26 01:04:44.706595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:14.405 [2024-07-26 01:04:44.706915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:14.405 [2024-07-26 01:04:44.706940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 
p:0 m:0 dnr:0 00:22:14.405 [2024-07-26 01:04:44.706961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:14.405 [2024-07-26 01:04:44.706977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:14.405 passed 00:22:14.405 Test: blockdev nvme passthru rw ...passed 00:22:14.405 Test: blockdev nvme passthru vendor specific ...[2024-07-26 01:04:44.791366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:14.405 [2024-07-26 01:04:44.791395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:14.405 [2024-07-26 01:04:44.791556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:14.405 [2024-07-26 01:04:44.791580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:14.405 [2024-07-26 01:04:44.791742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:14.405 [2024-07-26 01:04:44.791766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:14.405 [2024-07-26 01:04:44.791935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:14.405 [2024-07-26 01:04:44.791959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:14.405 passed 00:22:14.405 Test: blockdev nvme admin passthru ...passed 00:22:14.662 Test: blockdev copy ...passed 00:22:14.662 00:22:14.662 Run Summary: Type Total Ran Passed Failed Inactive 
00:22:14.662 suites 1 1 n/a 0 0 00:22:14.662 tests 23 23 23 0 0 00:22:14.662 asserts 152 152 152 0 n/a 00:22:14.662 00:22:14.662 Elapsed time = 1.021 seconds 00:22:14.920 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:14.920 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.920 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:14.920 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.920 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:14.920 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:14.920 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:14.920 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:14.920 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:14.920 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:14.920 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:14.920 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:14.920 rmmod nvme_tcp 00:22:14.920 rmmod nvme_fabrics 00:22:14.920 rmmod nvme_keyring 00:22:14.920 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:14.920 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:14.920 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:14.920 
01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1859857 ']' 00:22:14.920 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1859857 00:22:14.920 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 1859857 ']' 00:22:14.920 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 1859857 00:22:14.920 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:22:14.920 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:14.920 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1859857 00:22:14.920 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:22:14.920 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:22:14.920 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1859857' 00:22:14.920 killing process with pid 1859857 00:22:14.920 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 1859857 00:22:14.920 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 1859857 00:22:15.490 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:15.490 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:15.490 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:15.490 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:15.490 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:15.490 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.490 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:15.490 01:04:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.395 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:17.395 00:22:17.395 real 0m6.300s 00:22:17.395 user 0m9.627s 00:22:17.395 sys 0m2.474s 00:22:17.395 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:17.395 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:17.395 ************************************ 00:22:17.395 END TEST nvmf_bdevio_no_huge 00:22:17.395 ************************************ 00:22:17.395 01:04:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:17.395 01:04:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:17.395 01:04:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:17.395 01:04:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:17.395 ************************************ 00:22:17.395 START TEST nvmf_tls 00:22:17.395 ************************************ 00:22:17.395 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:17.395 * Looking for test storage... 
00:22:17.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:17.395 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:17.395 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:17.395 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:17.395 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:17.395 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:17.395 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:17.395 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:17.395 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:17.395 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:17.395 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:17.395 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:17.396 
01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:17.396 01:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:19.301 01:04:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.301 01:04:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:19.301 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:19.301 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.301 01:04:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:19.301 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:19.301 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:19.301 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:19.560 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:19.560 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:19.560 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:19.560 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:19.560 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:19.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:19.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:22:19.560 00:22:19.560 --- 10.0.0.2 ping statistics --- 00:22:19.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.560 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:22:19.560 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:19.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:19.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:22:19.560 00:22:19.560 --- 10.0.0.1 ping statistics --- 00:22:19.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.560 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:22:19.560 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.560 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:19.560 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:19.560 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.560 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:19.560 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:19.560 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.560 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:19.560 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:19.560 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:19.560 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:19.560 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:22:19.560 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.560 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1861952 00:22:19.560 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1861952 00:22:19.560 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:19.560 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1861952 ']' 00:22:19.560 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.560 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:19.560 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.560 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:19.560 01:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.560 [2024-07-26 01:04:49.863005] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:22:19.560 [2024-07-26 01:04:49.863091] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.560 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.560 [2024-07-26 01:04:49.933866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.818 [2024-07-26 01:04:50.031661] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.818 [2024-07-26 01:04:50.031727] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.818 [2024-07-26 01:04:50.031765] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:19.818 [2024-07-26 01:04:50.031780] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:19.818 [2024-07-26 01:04:50.031792] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:19.818 [2024-07-26 01:04:50.031820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.818 01:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:19.818 01:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:19.818 01:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:19.818 01:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:19.818 01:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.818 01:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.818 01:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:19.818 01:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:20.076 true 00:22:20.076 01:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:20.076 01:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:20.334 01:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:20.334 01:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:20.334 01:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:20.594 01:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:20.594 01:04:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:20.854 01:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:20.854 01:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:20.854 01:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:21.114 01:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:21.114 01:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:21.372 01:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:21.372 01:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:21.372 01:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:21.372 01:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:21.630 01:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:21.630 01:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:21.630 01:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:21.888 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:21.888 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:22.147 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:22.147 
01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:22.147 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:22.407 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:22.407 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:22.666 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:22.666 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:22.666 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:22.666 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:22.666 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:22.666 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:22.666 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:22.666 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:22.666 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:22.666 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:22.666 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:22.666 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
ffeeddccbbaa99887766554433221100 1 00:22:22.666 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:22.666 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:22.666 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:22.666 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:22.666 01:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:22.666 01:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:22.666 01:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:22.666 01:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.EE0lMiz6hs 00:22:22.666 01:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:22.666 01:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.p5ehyOI4wd 00:22:22.666 01:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:22.666 01:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:22.666 01:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.EE0lMiz6hs 00:22:22.666 01:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.p5ehyOI4wd 00:22:22.666 01:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:22.924 01:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:23.492 01:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.EE0lMiz6hs 00:22:23.492 01:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.EE0lMiz6hs 00:22:23.492 01:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:23.492 [2024-07-26 01:04:53.895478] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.492 01:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:24.057 01:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:24.057 [2024-07-26 01:04:54.440922] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:24.057 [2024-07-26 01:04:54.441184] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:24.057 01:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:24.317 malloc0 00:22:24.317 01:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:24.577 01:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.EE0lMiz6hs 00:22:24.837 
[2024-07-26 01:04:55.187271] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:24.837 01:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.EE0lMiz6hs 00:22:24.837 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.096 Initializing NVMe Controllers 00:22:37.096 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:37.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:37.096 Initialization complete. Launching workers. 00:22:37.096 ======================================================== 00:22:37.096 Latency(us) 00:22:37.096 Device Information : IOPS MiB/s Average min max 00:22:37.096 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7859.99 30.70 8145.17 1083.36 9903.16 00:22:37.096 ======================================================== 00:22:37.096 Total : 7859.99 30.70 8145.17 1083.36 9903.16 00:22:37.096 00:22:37.096 01:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EE0lMiz6hs 00:22:37.096 01:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:37.096 01:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:37.096 01:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:37.096 01:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.EE0lMiz6hs' 00:22:37.096 01:05:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:37.096 01:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1863844 00:22:37.096 01:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:37.096 01:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1863844 /var/tmp/bdevperf.sock 00:22:37.096 01:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:37.096 01:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1863844 ']' 00:22:37.096 01:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:37.096 01:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:37.096 01:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:37.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:37.096 01:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:37.096 01:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.096 [2024-07-26 01:05:05.362660] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:22:37.096 [2024-07-26 01:05:05.362744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1863844 ] 00:22:37.096 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.096 [2024-07-26 01:05:05.428836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.096 [2024-07-26 01:05:05.519492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.096 01:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:37.096 01:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:37.096 01:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.EE0lMiz6hs 00:22:37.096 [2024-07-26 01:05:05.898084] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:37.096 [2024-07-26 01:05:05.898204] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:37.096 TLSTESTn1 00:22:37.096 01:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:37.096 Running I/O for 10 seconds... 
00:22:47.083 00:22:47.083 Latency(us) 00:22:47.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.083 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:47.083 Verification LBA range: start 0x0 length 0x2000 00:22:47.083 TLSTESTn1 : 10.03 3210.57 12.54 0.00 0.00 39793.10 7864.32 54370.61 00:22:47.083 =================================================================================================================== 00:22:47.083 Total : 3210.57 12.54 0.00 0.00 39793.10 7864.32 54370.61 00:22:47.083 0 00:22:47.083 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1863844 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1863844 ']' 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1863844 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1863844 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1863844' 00:22:47.084 killing process with pid 1863844 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1863844 00:22:47.084 Received shutdown signal, test time was about 10.000000 seconds 00:22:47.084 
00:22:47.084 Latency(us) 00:22:47.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.084 =================================================================================================================== 00:22:47.084 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:47.084 [2024-07-26 01:05:16.179297] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1863844 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.p5ehyOI4wd 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.p5ehyOI4wd 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.p5ehyOI4wd 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:47.084 01:05:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.p5ehyOI4wd' 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1865160 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1865160 /var/tmp/bdevperf.sock 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1865160 ']' 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:47.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.084 [2024-07-26 01:05:16.439432] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:22:47.084 [2024-07-26 01:05:16.439522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1865160 ] 00:22:47.084 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.084 [2024-07-26 01:05:16.499329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.084 [2024-07-26 01:05:16.584590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p5ehyOI4wd 00:22:47.084 [2024-07-26 01:05:16.966648] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:47.084 [2024-07-26 01:05:16.966779] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:47.084 [2024-07-26 01:05:16.972263] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:47.084 [2024-07-26 01:05:16.972699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13eeab0 (107): Transport endpoint is not connected 00:22:47.084 [2024-07-26 01:05:16.973688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13eeab0 
(9): Bad file descriptor 00:22:47.084 [2024-07-26 01:05:16.974687] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:47.084 [2024-07-26 01:05:16.974708] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:47.084 [2024-07-26 01:05:16.974725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:47.084 request: 00:22:47.084 { 00:22:47.084 "name": "TLSTEST", 00:22:47.084 "trtype": "tcp", 00:22:47.084 "traddr": "10.0.0.2", 00:22:47.084 "adrfam": "ipv4", 00:22:47.084 "trsvcid": "4420", 00:22:47.084 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:47.084 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:47.084 "prchk_reftag": false, 00:22:47.084 "prchk_guard": false, 00:22:47.084 "hdgst": false, 00:22:47.084 "ddgst": false, 00:22:47.084 "psk": "/tmp/tmp.p5ehyOI4wd", 00:22:47.084 "method": "bdev_nvme_attach_controller", 00:22:47.084 "req_id": 1 00:22:47.084 } 00:22:47.084 Got JSON-RPC error response 00:22:47.084 response: 00:22:47.084 { 00:22:47.084 "code": -5, 00:22:47.084 "message": "Input/output error" 00:22:47.084 } 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1865160 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1865160 ']' 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1865160 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:47.084 01:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:47.084 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1865160 00:22:47.084 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:47.084 01:05:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:47.084 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1865160' 00:22:47.084 killing process with pid 1865160 00:22:47.084 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1865160 00:22:47.084 Received shutdown signal, test time was about 10.000000 seconds 00:22:47.084 00:22:47.084 Latency(us) 00:22:47.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.084 =================================================================================================================== 00:22:47.084 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:47.084 [2024-07-26 01:05:17.027267] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:47.084 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1865160 00:22:47.084 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:47.084 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:47.084 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:47.084 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:47.084 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:47.084 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.EE0lMiz6hs 00:22:47.085 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:47.085 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # 
valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.EE0lMiz6hs 00:22:47.085 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:47.085 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:47.085 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:47.085 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:47.085 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.EE0lMiz6hs 00:22:47.085 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:47.085 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:47.085 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:47.085 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.EE0lMiz6hs' 00:22:47.085 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:47.085 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1865173 00:22:47.085 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:47.085 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:47.085 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1865173 /var/tmp/bdevperf.sock 00:22:47.085 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 1865173 ']' 00:22:47.085 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:47.085 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:47.085 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:47.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:47.085 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:47.085 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.085 [2024-07-26 01:05:17.294393] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:22:47.085 [2024-07-26 01:05:17.294483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1865173 ] 00:22:47.085 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.085 [2024-07-26 01:05:17.352448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.085 [2024-07-26 01:05:17.443797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.351 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:47.351 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:47.351 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
-q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.EE0lMiz6hs 00:22:47.611 [2024-07-26 01:05:17.831088] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:47.611 [2024-07-26 01:05:17.831217] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:47.611 [2024-07-26 01:05:17.836456] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:47.611 [2024-07-26 01:05:17.836493] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:47.611 [2024-07-26 01:05:17.836536] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:47.611 [2024-07-26 01:05:17.836997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10deab0 (107): Transport endpoint is not connected 00:22:47.611 [2024-07-26 01:05:17.837986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10deab0 (9): Bad file descriptor 00:22:47.611 [2024-07-26 01:05:17.838985] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:47.611 [2024-07-26 01:05:17.839006] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:47.611 [2024-07-26 01:05:17.839023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
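The `tcp_sock_get_key` / `posix_sock_psk_find_session_server_cb` errors above are the target failing to look up a PSK for the identity `NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1` — expected here, since this test case attaches with a host/subsystem pair that has no PSK registered. A minimal sketch of the identity string as it appears in the log; the field breakdown (format digit, `R`, two-digit hash identifier) is inferred from the log output, not taken from the specification text:

```python
# Hypothetical helper reproducing the PSK identity shape seen in the log:
# "NVMe0R01 <hostnqn> <subnqn>". Field meanings are assumptions inferred
# from the log line, not an authoritative reading of the NVMe/TCP spec.
def psk_identity(hostnqn: str, subnqn: str, hash_id: int = 1) -> str:
    # 'NVMe' prefix, a format digit, 'R', a zero-padded hash identifier,
    # then the host NQN and subsystem NQN, space-separated.
    return f"NVMe0R{hash_id:02d} {hostnqn} {subnqn}"

identity = psk_identity("nqn.2016-06.io.spdk:host2",
                        "nqn.2016-06.io.spdk:cnode1")
print(identity)
```

With the NQNs from this test case, the helper reproduces the exact identity string the target reported it could not find.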
00:22:47.611 request: 00:22:47.611 { 00:22:47.611 "name": "TLSTEST", 00:22:47.611 "trtype": "tcp", 00:22:47.611 "traddr": "10.0.0.2", 00:22:47.611 "adrfam": "ipv4", 00:22:47.611 "trsvcid": "4420", 00:22:47.611 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:47.611 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:47.611 "prchk_reftag": false, 00:22:47.611 "prchk_guard": false, 00:22:47.611 "hdgst": false, 00:22:47.611 "ddgst": false, 00:22:47.611 "psk": "/tmp/tmp.EE0lMiz6hs", 00:22:47.611 "method": "bdev_nvme_attach_controller", 00:22:47.611 "req_id": 1 00:22:47.611 } 00:22:47.611 Got JSON-RPC error response 00:22:47.611 response: 00:22:47.611 { 00:22:47.611 "code": -5, 00:22:47.611 "message": "Input/output error" 00:22:47.611 } 00:22:47.611 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1865173 00:22:47.611 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1865173 ']' 00:22:47.611 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1865173 00:22:47.611 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:47.611 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:47.611 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1865173 00:22:47.611 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:47.611 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:47.611 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1865173' 00:22:47.611 killing process with pid 1865173 00:22:47.611 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1865173 00:22:47.611 Received shutdown signal, test time was 
about 10.000000 seconds 00:22:47.611 00:22:47.611 Latency(us) 00:22:47.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.611 =================================================================================================================== 00:22:47.611 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:47.611 [2024-07-26 01:05:17.890815] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:47.611 01:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1865173 00:22:47.870 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:47.870 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:47.870 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:47.870 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:47.870 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:47.870 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.EE0lMiz6hs 00:22:47.870 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:47.870 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.EE0lMiz6hs 00:22:47.870 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:47.870 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:47.870 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t 
run_bdevperf 00:22:47.870 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:47.870 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.EE0lMiz6hs 00:22:47.870 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:47.870 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:47.870 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:47.870 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.EE0lMiz6hs' 00:22:47.870 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:47.870 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1865312 00:22:47.870 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:47.870 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:47.870 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1865312 /var/tmp/bdevperf.sock 00:22:47.870 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1865312 ']' 00:22:47.870 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:47.870 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:47.871 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:47.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:47.871 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:47.871 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.871 [2024-07-26 01:05:18.149769] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:22:47.871 [2024-07-26 01:05:18.149873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1865312 ] 00:22:47.871 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.871 [2024-07-26 01:05:18.206871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.871 [2024-07-26 01:05:18.289149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.129 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:48.129 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:48.129 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.EE0lMiz6hs 00:22:48.389 [2024-07-26 01:05:18.643624] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:48.389 [2024-07-26 01:05:18.643764] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:48.389 [2024-07-26 01:05:18.655678] tcp.c: 
894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:48.389 [2024-07-26 01:05:18.655714] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:48.389 [2024-07-26 01:05:18.655770] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:48.389 [2024-07-26 01:05:18.655952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x136eab0 (107): Transport endpoint is not connected 00:22:48.389 [2024-07-26 01:05:18.656937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x136eab0 (9): Bad file descriptor 00:22:48.389 [2024-07-26 01:05:18.657936] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:48.389 [2024-07-26 01:05:18.657961] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:48.389 [2024-07-26 01:05:18.657978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
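The failure sequence above walks through two errno values as the TCP qpair is torn down: 107 while the socket still exists but is disconnected, then 9 once its file descriptor has been closed, after which the controller lands in the failed state. A small sketch mapping those numbers to their symbolic names (the numeric values are Linux-specific):

```python
import errno
import os

# Map the two errno values from the log to their symbolic names and
# standard messages. On Linux, 107 is ENOTCONN ("Transport endpoint is
# not connected") and 9 is EBADF ("Bad file descriptor").
for num in (107, 9):
    print(num, errno.errorcode[num], os.strerror(num))
```

Seeing ENOTCONN followed by EBADF on the same qpair is consistent with the target dropping the connection during the TLS handshake and the initiator then polling a socket that has already been closed.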
00:22:48.389 request: 00:22:48.389 { 00:22:48.389 "name": "TLSTEST", 00:22:48.389 "trtype": "tcp", 00:22:48.389 "traddr": "10.0.0.2", 00:22:48.389 "adrfam": "ipv4", 00:22:48.389 "trsvcid": "4420", 00:22:48.389 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:48.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:48.389 "prchk_reftag": false, 00:22:48.389 "prchk_guard": false, 00:22:48.389 "hdgst": false, 00:22:48.389 "ddgst": false, 00:22:48.389 "psk": "/tmp/tmp.EE0lMiz6hs", 00:22:48.389 "method": "bdev_nvme_attach_controller", 00:22:48.389 "req_id": 1 00:22:48.389 } 00:22:48.389 Got JSON-RPC error response 00:22:48.389 response: 00:22:48.389 { 00:22:48.389 "code": -5, 00:22:48.389 "message": "Input/output error" 00:22:48.389 } 00:22:48.389 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1865312 00:22:48.389 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1865312 ']' 00:22:48.389 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1865312 00:22:48.389 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:48.389 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:48.389 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1865312 00:22:48.389 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:48.389 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:48.389 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1865312' 00:22:48.389 killing process with pid 1865312 00:22:48.389 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1865312 00:22:48.389 Received shutdown signal, test time was 
about 10.000000 seconds 00:22:48.389 00:22:48.389 Latency(us) 00:22:48.389 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.389 =================================================================================================================== 00:22:48.389 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:48.389 [2024-07-26 01:05:18.709315] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:48.389 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1865312 00:22:48.648 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:48.648 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:48.648 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:48.648 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:48.648 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:48.648 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:48.648 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:48.648 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:48.648 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:48.648 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:48.648 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:48.648 01:05:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:48.648 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:48.648 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:48.648 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:48.648 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:48.648 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:48.648 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:48.648 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1865451 00:22:48.648 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:48.648 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:48.648 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1865451 /var/tmp/bdevperf.sock 00:22:48.648 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1865451 ']' 00:22:48.648 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:48.648 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:48.648 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:48.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:48.648 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:48.648 01:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.648 [2024-07-26 01:05:18.975366] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:22:48.648 [2024-07-26 01:05:18.975471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1865451 ] 00:22:48.648 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.648 [2024-07-26 01:05:19.033443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.907 [2024-07-26 01:05:19.118427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.907 01:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:48.907 01:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:48.907 01:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:49.166 [2024-07-26 01:05:19.487865] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:49.166 [2024-07-26 01:05:19.489591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c6e60 (9): Bad file descriptor 00:22:49.166 [2024-07-26 01:05:19.490584] 
nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:49.166 [2024-07-26 01:05:19.490606] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:49.166 [2024-07-26 01:05:19.490622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:49.166 request: 00:22:49.166 { 00:22:49.166 "name": "TLSTEST", 00:22:49.166 "trtype": "tcp", 00:22:49.166 "traddr": "10.0.0.2", 00:22:49.166 "adrfam": "ipv4", 00:22:49.166 "trsvcid": "4420", 00:22:49.166 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.166 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:49.166 "prchk_reftag": false, 00:22:49.166 "prchk_guard": false, 00:22:49.166 "hdgst": false, 00:22:49.166 "ddgst": false, 00:22:49.166 "method": "bdev_nvme_attach_controller", 00:22:49.166 "req_id": 1 00:22:49.166 } 00:22:49.166 Got JSON-RPC error response 00:22:49.166 response: 00:22:49.166 { 00:22:49.166 "code": -5, 00:22:49.166 "message": "Input/output error" 00:22:49.166 } 00:22:49.166 01:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1865451 00:22:49.166 01:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1865451 ']' 00:22:49.166 01:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1865451 00:22:49.166 01:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:49.166 01:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:49.166 01:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1865451 00:22:49.166 01:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:49.167 01:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:49.167 01:05:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1865451' 00:22:49.167 killing process with pid 1865451 00:22:49.167 01:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1865451 00:22:49.167 Received shutdown signal, test time was about 10.000000 seconds 00:22:49.167 00:22:49.167 Latency(us) 00:22:49.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.167 =================================================================================================================== 00:22:49.167 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:49.167 01:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1865451 00:22:49.427 01:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:49.427 01:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:49.427 01:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:49.427 01:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:49.427 01:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:49.427 01:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 1861952 00:22:49.427 01:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1861952 ']' 00:22:49.427 01:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1861952 00:22:49.427 01:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:49.427 01:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:49.427 01:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1861952 00:22:49.427 
01:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:49.427 01:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:49.427 01:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1861952' 00:22:49.427 killing process with pid 1861952 00:22:49.427 01:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1861952 00:22:49.427 [2024-07-26 01:05:19.779858] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:49.427 01:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1861952 00:22:49.686 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:49.686 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:49.686 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:49.686 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:49.686 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:49.686 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:49.686 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:49.686 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:49.686 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:49.686 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@160 -- # key_long_path=/tmp/tmp.lVO6vyF9ks 00:22:49.686 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:49.686 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.lVO6vyF9ks 00:22:49.686 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:49.686 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:49.686 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:49.686 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.686 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1865601 00:22:49.686 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:49.686 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1865601 00:22:49.686 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1865601 ']' 00:22:49.686 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.686 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:49.686 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
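The `format_interchange_psk` step above wraps the configured key material into the TLS PSK interchange form `NVMeTLSkey-1:<digest>:<base64 payload>:` and writes it to a 0600-mode temp file. A minimal sketch of that formatting; treating the key text itself as the payload bytes, and the trailing four payload bytes as a little-endian CRC32 of the key, are assumptions inferred from the payload length in the log, not confirmed against SPDK's `format_key` helper:

```python
import base64
import struct
import zlib

# Sketch (assumed scheme): payload = key-material bytes followed by a
# little-endian CRC32 of those bytes, base64-encoded between the
# "NVMeTLSkey-1:<digest>:" prefix and a trailing ":".
def format_interchange_psk(key_text: str, digest: int) -> str:
    key = key_text.encode("ascii")  # the configured hex text is used as-is
    payload = key + struct.pack("<I", zlib.crc32(key))
    return f"NVMeTLSkey-1:{digest:02d}:{base64.b64encode(payload).decode()}:"

key_long = format_interchange_psk(
    "00112233445566778899aabbccddeeff0011223344556677", 2)
print(key_long)
```

Decoding the base64 payload of the key recovers the configured key text, and the key-material portion of the encoding matches the `NVMeTLSkey-1:02:MDAxMTIy…` value printed in the log above.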
00:22:49.686 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:49.686 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.944 [2024-07-26 01:05:20.137511] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:22:49.944 [2024-07-26 01:05:20.137586] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.944 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.944 [2024-07-26 01:05:20.203209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.944 [2024-07-26 01:05:20.292145] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.944 [2024-07-26 01:05:20.292212] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.944 [2024-07-26 01:05:20.292237] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.944 [2024-07-26 01:05:20.292251] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.944 [2024-07-26 01:05:20.292263] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:49.944 [2024-07-26 01:05:20.292300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.202 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:50.202 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:50.202 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:50.202 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:50.202 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.202 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:50.202 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.lVO6vyF9ks 00:22:50.202 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lVO6vyF9ks 00:22:50.202 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:50.460 [2024-07-26 01:05:20.654571] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.460 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:50.720 01:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:50.981 [2024-07-26 01:05:21.155964] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:50.981 [2024-07-26 01:05:21.156232] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:22:50.981 01:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:51.241 malloc0 00:22:51.241 01:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:51.500 01:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lVO6vyF9ks 00:22:51.500 [2024-07-26 01:05:21.906425] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:51.500 01:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lVO6vyF9ks 00:22:51.500 01:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:51.500 01:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:51.500 01:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:51.500 01:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.lVO6vyF9ks' 00:22:51.500 01:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:51.500 01:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1865775 00:22:51.500 01:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:51.500 01:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:51.500 01:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1865775 /var/tmp/bdevperf.sock 00:22:51.500 01:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1865775 ']' 00:22:51.500 01:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:51.500 01:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:51.500 01:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:51.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:51.500 01:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:51.500 01:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.760 [2024-07-26 01:05:21.973212] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:22:51.760 [2024-07-26 01:05:21.973292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1865775 ] 00:22:51.760 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.760 [2024-07-26 01:05:22.036572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.760 [2024-07-26 01:05:22.127529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.018 01:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:52.018 01:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:52.018 01:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lVO6vyF9ks 00:22:52.276 [2024-07-26 01:05:22.513942] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:52.276 [2024-07-26 01:05:22.514091] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:52.276 TLSTESTn1 00:22:52.276 01:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:52.535 Running I/O for 10 seconds... 
00:23:02.517
00:23:02.517 Latency(us)
00:23:02.517 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:02.517 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:02.517 Verification LBA range: start 0x0 length 0x2000
00:23:02.517 TLSTESTn1 : 10.02 1934.73 7.56 0.00 0.00 66036.05 4927.34 60584.39
00:23:02.517 ===================================================================================================================
00:23:02.517 Total : 1934.73 7.56 0.00 0.00 66036.05 4927.34 60584.39
00:23:02.517 0
00:23:02.517 01:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:02.517 01:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1865775 00:23:02.517 01:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1865775 ']' 00:23:02.517 01:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1865775 00:23:02.517 01:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:02.517 01:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:02.517 01:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1865775 00:23:02.517 01:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:02.517 01:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:02.517 01:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1865775' 00:23:02.517 killing process with pid 1865775 00:23:02.517 01:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1865775 00:23:02.517 Received shutdown signal, test time was about 10.000000 seconds
00:23:02.517 Latency(us)
00:23:02.517 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:02.517 ===================================================================================================================
00:23:02.517 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:02.517 [2024-07-26 01:05:32.813205] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:02.517 01:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1865775 00:23:02.775 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.lVO6vyF9ks 00:23:02.775 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lVO6vyF9ks 00:23:02.775 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:02.775 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lVO6vyF9ks 00:23:02.775 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:02.775 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:02.775 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:02.775 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:02.775 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lVO6vyF9ks 00:23:02.775 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:02.775 01:05:33
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:02.775 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:02.775 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.lVO6vyF9ks' 00:23:02.775 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:02.775 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1867079 00:23:02.775 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:02.775 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:02.775 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1867079 /var/tmp/bdevperf.sock 00:23:02.775 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1867079 ']' 00:23:02.775 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:02.775 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:02.775 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:02.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:02.775 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:02.775 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.775 [2024-07-26 01:05:33.090818] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:23:02.775 [2024-07-26 01:05:33.090906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1867079 ] 00:23:02.775 EAL: No free 2048 kB hugepages reported on node 1 00:23:02.775 [2024-07-26 01:05:33.151220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.034 [2024-07-26 01:05:33.237178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:03.034 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:03.034 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:03.034 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lVO6vyF9ks 00:23:03.301 [2024-07-26 01:05:33.594969] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:03.301 [2024-07-26 01:05:33.595073] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:03.301 [2024-07-26 01:05:33.595103] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.lVO6vyF9ks
00:23:03.301 request:
00:23:03.301 {
00:23:03.301 "name": "TLSTEST",
00:23:03.301 "trtype": "tcp",
00:23:03.301 "traddr": "10.0.0.2",
00:23:03.301 "adrfam": "ipv4",
00:23:03.301 "trsvcid": "4420",
00:23:03.301 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:23:03.301 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:23:03.301 "prchk_reftag": false,
00:23:03.301 "prchk_guard": false,
00:23:03.301 "hdgst": false,
00:23:03.301 "ddgst": false,
00:23:03.301 "psk": "/tmp/tmp.lVO6vyF9ks",
00:23:03.301 "method": "bdev_nvme_attach_controller",
00:23:03.301 "req_id": 1
00:23:03.301 }
00:23:03.301 Got JSON-RPC error response
00:23:03.301 response:
00:23:03.301 {
00:23:03.301 "code": -1,
00:23:03.301 "message": "Operation not permitted"
00:23:03.301 }
00:23:03.301 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1867079 00:23:03.302 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1867079 ']' 00:23:03.302 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1867079 00:23:03.302 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:03.302 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:03.302 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1867079 00:23:03.302 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:03.302 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:03.302 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1867079' 00:23:03.302 killing process with pid 1867079 00:23:03.302 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1867079 00:23:03.302 Received shutdown signal, test time was about 10.000000 seconds
00:23:03.302
00:23:03.302 Latency(us)
00:23:03.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:03.302 ===================================================================================================================
00:23:03.302 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:23:03.302 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1867079 00:23:03.564 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:03.564 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:03.564 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:03.564 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:03.564 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:03.564 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 1865601 00:23:03.564 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1865601 ']' 00:23:03.564 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1865601 00:23:03.564 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:03.564 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:03.564 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1865601 00:23:03.564 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:03.564 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:03.564 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1865601' 00:23:03.564 killing process with pid 1865601 00:23:03.564 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls --
common/autotest_common.sh@969 -- # kill 1865601 00:23:03.564 [2024-07-26 01:05:33.888407] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:03.564 01:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1865601 00:23:03.823 01:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:03.823 01:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:03.823 01:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:03.823 01:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.823 01:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1867222 00:23:03.823 01:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:03.823 01:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1867222 00:23:03.823 01:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1867222 ']' 00:23:03.823 01:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.823 01:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:03.823 01:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:03.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:03.823 01:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:03.823 01:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.823 [2024-07-26 01:05:34.187091] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:23:03.823 [2024-07-26 01:05:34.187197] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:03.823 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.115 [2024-07-26 01:05:34.254641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.115 [2024-07-26 01:05:34.341822] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:04.115 [2024-07-26 01:05:34.341888] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:04.115 [2024-07-26 01:05:34.341915] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:04.115 [2024-07-26 01:05:34.341929] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:04.115 [2024-07-26 01:05:34.341941] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
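Throughout the run, `nvmf_tgt` is started with `-m 0x2` and `bdevperf` with `-m 0x4`: the core mask selects CPU cores by bit index, which is why the reactors report core 1 and core 2 respectively. A small sketch of that decoding (the helper name is illustrative):

```python
def cores_from_mask(mask: int) -> list[int]:
    """Return the CPU core indices selected by an SPDK-style hex core mask:
    bit i set means core i is used by a reactor."""
    return [i for i in range(mask.bit_length()) if (mask >> i) & 1]


# -m 0x2 -> core 1 (nvmf_tgt), -m 0x4 -> core 2 (bdevperf)
nvmf_cores = cores_from_mask(0x2)
bdevperf_cores = cores_from_mask(0x4)
```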
00:23:04.115 [2024-07-26 01:05:34.341973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:04.115 01:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:04.115 01:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:04.115 01:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:04.115 01:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:04.115 01:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.115 01:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:04.115 01:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.lVO6vyF9ks 00:23:04.116 01:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:04.116 01:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.lVO6vyF9ks 00:23:04.116 01:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:23:04.116 01:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:04.116 01:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:23:04.116 01:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:04.116 01:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.lVO6vyF9ks 00:23:04.116 01:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lVO6vyF9ks 00:23:04.116 01:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:04.374 [2024-07-26 01:05:34.733067] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:04.374 01:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:04.632 01:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:04.890 [2024-07-26 01:05:35.214370] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:04.890 [2024-07-26 01:05:35.214634] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:04.890 01:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:05.147 malloc0 00:23:05.148 01:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:05.405 01:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lVO6vyF9ks 00:23:05.662 [2024-07-26 01:05:36.048207] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:05.662 [2024-07-26 01:05:36.048252] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:05.662 [2024-07-26 01:05:36.048297] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport
00:23:05.662 request:
00:23:05.662 {
00:23:05.662 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:05.662 "host": "nqn.2016-06.io.spdk:host1",
00:23:05.662 "psk": "/tmp/tmp.lVO6vyF9ks",
00:23:05.662 "method": "nvmf_subsystem_add_host",
00:23:05.662 "req_id": 1
00:23:05.662 }
00:23:05.662 Got JSON-RPC error response
00:23:05.662 response:
00:23:05.662 {
00:23:05.662 "code": -32603,
00:23:05.662 "message": "Internal error"
00:23:05.662 }
00:23:05.662 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:05.662 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:05.662 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:05.662 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:05.662 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 1867222 00:23:05.662 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1867222 ']' 00:23:05.662 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1867222 00:23:05.662 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:05.662 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:05.662 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1867222 00:23:05.920 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:05.920 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:05.920 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1867222' 00:23:05.920 killing process with pid 1867222 00:23:05.920 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls --
common/autotest_common.sh@969 -- # kill 1867222 00:23:05.920 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1867222 00:23:06.178 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.lVO6vyF9ks 00:23:06.178 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:06.178 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:06.178 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:06.178 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.178 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1867520 00:23:06.178 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:06.178 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1867520 00:23:06.178 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1867520 ']' 00:23:06.178 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.178 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:06.178 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
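The run above closes the loop on PSK file permissions: with the key file at 0666, `bdev_nvme_attach_controller` fails with "Operation not permitted" and `nvmf_subsystem_add_host` with an internal error, both logging "Incorrect permissions for PSK file"; the `chmod 0600` here makes the key loadable again. A minimal sketch of that kind of check, assuming the semantics are "reject any group/other access bits" (`psk_perms_ok` is a hypothetical helper, not SPDK source):

```python
import os
import stat


def psk_perms_ok(path: str) -> bool:
    """Reject a PSK file whose mode grants any group/other access,
    mirroring the 0600-vs-0666 behaviour exercised in this test run."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return (mode & 0o077) == 0


# usage sketch:
#   os.chmod(key_path, 0o600)  -> psk_perms_ok(key_path) is True
#   os.chmod(key_path, 0o666)  -> psk_perms_ok(key_path) is False
```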
00:23:06.178 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:06.178 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.178 [2024-07-26 01:05:36.406835] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:23:06.178 [2024-07-26 01:05:36.406925] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.178 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.178 [2024-07-26 01:05:36.481221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.178 [2024-07-26 01:05:36.570666] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:06.178 [2024-07-26 01:05:36.570723] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:06.178 [2024-07-26 01:05:36.570737] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:06.178 [2024-07-26 01:05:36.570748] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:06.178 [2024-07-26 01:05:36.570759] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:06.178 [2024-07-26 01:05:36.570785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.436 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:06.436 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:06.436 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:06.436 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:06.436 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.436 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.436 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.lVO6vyF9ks 00:23:06.436 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lVO6vyF9ks 00:23:06.436 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:06.693 [2024-07-26 01:05:36.954767] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:06.693 01:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:06.950 01:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:07.207 [2024-07-26 01:05:37.448077] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:07.207 [2024-07-26 01:05:37.448320] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:07.207 01:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:07.464 malloc0 00:23:07.464 01:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:07.722 01:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lVO6vyF9ks 00:23:07.979 [2024-07-26 01:05:38.250643] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:07.979 01:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1867803 00:23:07.979 01:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:07.979 01:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:07.979 01:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1867803 /var/tmp/bdevperf.sock 00:23:07.979 01:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1867803 ']' 00:23:07.979 01:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.979 01:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:07.979 01:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:23:07.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:07.979 01:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:07.979 01:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.979 [2024-07-26 01:05:38.307445] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:23:07.979 [2024-07-26 01:05:38.307532] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1867803 ] 00:23:07.979 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.979 [2024-07-26 01:05:38.367222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.237 [2024-07-26 01:05:38.454180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.237 01:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:08.237 01:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:08.237 01:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lVO6vyF9ks 00:23:08.495 [2024-07-26 01:05:38.778254] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:08.495 [2024-07-26 01:05:38.778391] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:08.495 TLSTESTn1 00:23:08.495 01:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:09.060 01:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:09.060 "subsystems": [ 00:23:09.060 { 00:23:09.060 "subsystem": "keyring", 00:23:09.060 "config": [] 00:23:09.060 }, 00:23:09.060 { 00:23:09.060 "subsystem": "iobuf", 00:23:09.060 "config": [ 00:23:09.060 { 00:23:09.060 "method": "iobuf_set_options", 00:23:09.060 "params": { 00:23:09.060 "small_pool_count": 8192, 00:23:09.060 "large_pool_count": 1024, 00:23:09.060 "small_bufsize": 8192, 00:23:09.060 "large_bufsize": 135168 00:23:09.060 } 00:23:09.060 } 00:23:09.060 ] 00:23:09.060 }, 00:23:09.060 { 00:23:09.060 "subsystem": "sock", 00:23:09.060 "config": [ 00:23:09.060 { 00:23:09.060 "method": "sock_set_default_impl", 00:23:09.060 "params": { 00:23:09.060 "impl_name": "posix" 00:23:09.060 } 00:23:09.060 }, 00:23:09.060 { 00:23:09.060 "method": "sock_impl_set_options", 00:23:09.060 "params": { 00:23:09.060 "impl_name": "ssl", 00:23:09.060 "recv_buf_size": 4096, 00:23:09.060 "send_buf_size": 4096, 00:23:09.060 "enable_recv_pipe": true, 00:23:09.060 "enable_quickack": false, 00:23:09.060 "enable_placement_id": 0, 00:23:09.060 "enable_zerocopy_send_server": true, 00:23:09.060 "enable_zerocopy_send_client": false, 00:23:09.060 "zerocopy_threshold": 0, 00:23:09.060 "tls_version": 0, 00:23:09.060 "enable_ktls": false 00:23:09.060 } 00:23:09.060 }, 00:23:09.060 { 00:23:09.060 "method": "sock_impl_set_options", 00:23:09.060 "params": { 00:23:09.060 "impl_name": "posix", 00:23:09.060 "recv_buf_size": 2097152, 00:23:09.060 "send_buf_size": 2097152, 00:23:09.060 "enable_recv_pipe": true, 00:23:09.060 "enable_quickack": false, 00:23:09.060 "enable_placement_id": 0, 00:23:09.060 "enable_zerocopy_send_server": true, 00:23:09.060 "enable_zerocopy_send_client": false, 00:23:09.060 "zerocopy_threshold": 0, 00:23:09.060 "tls_version": 0, 00:23:09.061 "enable_ktls": false 00:23:09.061 } 
00:23:09.061 } 00:23:09.061 ] 00:23:09.061 }, 00:23:09.061 { 00:23:09.061 "subsystem": "vmd", 00:23:09.061 "config": [] 00:23:09.061 }, 00:23:09.061 { 00:23:09.061 "subsystem": "accel", 00:23:09.061 "config": [ 00:23:09.061 { 00:23:09.061 "method": "accel_set_options", 00:23:09.061 "params": { 00:23:09.061 "small_cache_size": 128, 00:23:09.061 "large_cache_size": 16, 00:23:09.061 "task_count": 2048, 00:23:09.061 "sequence_count": 2048, 00:23:09.061 "buf_count": 2048 00:23:09.061 } 00:23:09.061 } 00:23:09.061 ] 00:23:09.061 }, 00:23:09.061 { 00:23:09.061 "subsystem": "bdev", 00:23:09.061 "config": [ 00:23:09.061 { 00:23:09.061 "method": "bdev_set_options", 00:23:09.061 "params": { 00:23:09.061 "bdev_io_pool_size": 65535, 00:23:09.061 "bdev_io_cache_size": 256, 00:23:09.061 "bdev_auto_examine": true, 00:23:09.061 "iobuf_small_cache_size": 128, 00:23:09.061 "iobuf_large_cache_size": 16 00:23:09.061 } 00:23:09.061 }, 00:23:09.061 { 00:23:09.061 "method": "bdev_raid_set_options", 00:23:09.061 "params": { 00:23:09.061 "process_window_size_kb": 1024, 00:23:09.061 "process_max_bandwidth_mb_sec": 0 00:23:09.061 } 00:23:09.061 }, 00:23:09.061 { 00:23:09.061 "method": "bdev_iscsi_set_options", 00:23:09.061 "params": { 00:23:09.061 "timeout_sec": 30 00:23:09.061 } 00:23:09.061 }, 00:23:09.061 { 00:23:09.061 "method": "bdev_nvme_set_options", 00:23:09.061 "params": { 00:23:09.061 "action_on_timeout": "none", 00:23:09.061 "timeout_us": 0, 00:23:09.061 "timeout_admin_us": 0, 00:23:09.061 "keep_alive_timeout_ms": 10000, 00:23:09.061 "arbitration_burst": 0, 00:23:09.061 "low_priority_weight": 0, 00:23:09.061 "medium_priority_weight": 0, 00:23:09.061 "high_priority_weight": 0, 00:23:09.061 "nvme_adminq_poll_period_us": 10000, 00:23:09.061 "nvme_ioq_poll_period_us": 0, 00:23:09.061 "io_queue_requests": 0, 00:23:09.061 "delay_cmd_submit": true, 00:23:09.061 "transport_retry_count": 4, 00:23:09.061 "bdev_retry_count": 3, 00:23:09.061 "transport_ack_timeout": 0, 00:23:09.061 
"ctrlr_loss_timeout_sec": 0, 00:23:09.061 "reconnect_delay_sec": 0, 00:23:09.061 "fast_io_fail_timeout_sec": 0, 00:23:09.061 "disable_auto_failback": false, 00:23:09.061 "generate_uuids": false, 00:23:09.061 "transport_tos": 0, 00:23:09.061 "nvme_error_stat": false, 00:23:09.061 "rdma_srq_size": 0, 00:23:09.061 "io_path_stat": false, 00:23:09.061 "allow_accel_sequence": false, 00:23:09.061 "rdma_max_cq_size": 0, 00:23:09.061 "rdma_cm_event_timeout_ms": 0, 00:23:09.061 "dhchap_digests": [ 00:23:09.061 "sha256", 00:23:09.061 "sha384", 00:23:09.061 "sha512" 00:23:09.061 ], 00:23:09.061 "dhchap_dhgroups": [ 00:23:09.061 "null", 00:23:09.061 "ffdhe2048", 00:23:09.061 "ffdhe3072", 00:23:09.061 "ffdhe4096", 00:23:09.061 "ffdhe6144", 00:23:09.061 "ffdhe8192" 00:23:09.061 ] 00:23:09.061 } 00:23:09.061 }, 00:23:09.061 { 00:23:09.061 "method": "bdev_nvme_set_hotplug", 00:23:09.061 "params": { 00:23:09.061 "period_us": 100000, 00:23:09.061 "enable": false 00:23:09.061 } 00:23:09.061 }, 00:23:09.061 { 00:23:09.061 "method": "bdev_malloc_create", 00:23:09.061 "params": { 00:23:09.061 "name": "malloc0", 00:23:09.061 "num_blocks": 8192, 00:23:09.061 "block_size": 4096, 00:23:09.061 "physical_block_size": 4096, 00:23:09.061 "uuid": "8986f209-6922-4a59-b5a8-a0684812a404", 00:23:09.061 "optimal_io_boundary": 0, 00:23:09.061 "md_size": 0, 00:23:09.061 "dif_type": 0, 00:23:09.061 "dif_is_head_of_md": false, 00:23:09.061 "dif_pi_format": 0 00:23:09.061 } 00:23:09.061 }, 00:23:09.061 { 00:23:09.061 "method": "bdev_wait_for_examine" 00:23:09.061 } 00:23:09.061 ] 00:23:09.061 }, 00:23:09.061 { 00:23:09.061 "subsystem": "nbd", 00:23:09.061 "config": [] 00:23:09.061 }, 00:23:09.061 { 00:23:09.061 "subsystem": "scheduler", 00:23:09.061 "config": [ 00:23:09.061 { 00:23:09.061 "method": "framework_set_scheduler", 00:23:09.061 "params": { 00:23:09.061 "name": "static" 00:23:09.061 } 00:23:09.061 } 00:23:09.061 ] 00:23:09.061 }, 00:23:09.061 { 00:23:09.061 "subsystem": "nvmf", 00:23:09.061 
"config": [ 00:23:09.061 { 00:23:09.061 "method": "nvmf_set_config", 00:23:09.061 "params": { 00:23:09.061 "discovery_filter": "match_any", 00:23:09.061 "admin_cmd_passthru": { 00:23:09.061 "identify_ctrlr": false 00:23:09.061 } 00:23:09.061 } 00:23:09.061 }, 00:23:09.061 { 00:23:09.061 "method": "nvmf_set_max_subsystems", 00:23:09.061 "params": { 00:23:09.061 "max_subsystems": 1024 00:23:09.061 } 00:23:09.061 }, 00:23:09.061 { 00:23:09.061 "method": "nvmf_set_crdt", 00:23:09.061 "params": { 00:23:09.061 "crdt1": 0, 00:23:09.061 "crdt2": 0, 00:23:09.061 "crdt3": 0 00:23:09.061 } 00:23:09.061 }, 00:23:09.061 { 00:23:09.061 "method": "nvmf_create_transport", 00:23:09.061 "params": { 00:23:09.061 "trtype": "TCP", 00:23:09.061 "max_queue_depth": 128, 00:23:09.061 "max_io_qpairs_per_ctrlr": 127, 00:23:09.061 "in_capsule_data_size": 4096, 00:23:09.061 "max_io_size": 131072, 00:23:09.061 "io_unit_size": 131072, 00:23:09.061 "max_aq_depth": 128, 00:23:09.061 "num_shared_buffers": 511, 00:23:09.061 "buf_cache_size": 4294967295, 00:23:09.061 "dif_insert_or_strip": false, 00:23:09.061 "zcopy": false, 00:23:09.061 "c2h_success": false, 00:23:09.061 "sock_priority": 0, 00:23:09.061 "abort_timeout_sec": 1, 00:23:09.061 "ack_timeout": 0, 00:23:09.061 "data_wr_pool_size": 0 00:23:09.061 } 00:23:09.061 }, 00:23:09.061 { 00:23:09.061 "method": "nvmf_create_subsystem", 00:23:09.061 "params": { 00:23:09.061 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.061 "allow_any_host": false, 00:23:09.061 "serial_number": "SPDK00000000000001", 00:23:09.061 "model_number": "SPDK bdev Controller", 00:23:09.061 "max_namespaces": 10, 00:23:09.061 "min_cntlid": 1, 00:23:09.061 "max_cntlid": 65519, 00:23:09.061 "ana_reporting": false 00:23:09.061 } 00:23:09.061 }, 00:23:09.061 { 00:23:09.061 "method": "nvmf_subsystem_add_host", 00:23:09.061 "params": { 00:23:09.061 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.061 "host": "nqn.2016-06.io.spdk:host1", 00:23:09.061 "psk": "/tmp/tmp.lVO6vyF9ks" 
00:23:09.061 } 00:23:09.061 }, 00:23:09.061 { 00:23:09.061 "method": "nvmf_subsystem_add_ns", 00:23:09.061 "params": { 00:23:09.061 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.061 "namespace": { 00:23:09.061 "nsid": 1, 00:23:09.061 "bdev_name": "malloc0", 00:23:09.061 "nguid": "8986F20969224A59B5A8A0684812A404", 00:23:09.061 "uuid": "8986f209-6922-4a59-b5a8-a0684812a404", 00:23:09.061 "no_auto_visible": false 00:23:09.061 } 00:23:09.061 } 00:23:09.061 }, 00:23:09.061 { 00:23:09.061 "method": "nvmf_subsystem_add_listener", 00:23:09.061 "params": { 00:23:09.061 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.061 "listen_address": { 00:23:09.061 "trtype": "TCP", 00:23:09.061 "adrfam": "IPv4", 00:23:09.061 "traddr": "10.0.0.2", 00:23:09.061 "trsvcid": "4420" 00:23:09.061 }, 00:23:09.061 "secure_channel": true 00:23:09.061 } 00:23:09.061 } 00:23:09.061 ] 00:23:09.061 } 00:23:09.061 ] 00:23:09.061 }' 00:23:09.062 01:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:09.320 01:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:09.320 "subsystems": [ 00:23:09.320 { 00:23:09.320 "subsystem": "keyring", 00:23:09.320 "config": [] 00:23:09.320 }, 00:23:09.320 { 00:23:09.320 "subsystem": "iobuf", 00:23:09.320 "config": [ 00:23:09.320 { 00:23:09.320 "method": "iobuf_set_options", 00:23:09.320 "params": { 00:23:09.320 "small_pool_count": 8192, 00:23:09.320 "large_pool_count": 1024, 00:23:09.320 "small_bufsize": 8192, 00:23:09.320 "large_bufsize": 135168 00:23:09.320 } 00:23:09.320 } 00:23:09.320 ] 00:23:09.320 }, 00:23:09.320 { 00:23:09.320 "subsystem": "sock", 00:23:09.320 "config": [ 00:23:09.320 { 00:23:09.320 "method": "sock_set_default_impl", 00:23:09.320 "params": { 00:23:09.320 "impl_name": "posix" 00:23:09.320 } 00:23:09.320 }, 00:23:09.320 { 00:23:09.320 "method": "sock_impl_set_options", 00:23:09.320 
"params": { 00:23:09.320 "impl_name": "ssl", 00:23:09.320 "recv_buf_size": 4096, 00:23:09.320 "send_buf_size": 4096, 00:23:09.320 "enable_recv_pipe": true, 00:23:09.320 "enable_quickack": false, 00:23:09.320 "enable_placement_id": 0, 00:23:09.320 "enable_zerocopy_send_server": true, 00:23:09.320 "enable_zerocopy_send_client": false, 00:23:09.320 "zerocopy_threshold": 0, 00:23:09.320 "tls_version": 0, 00:23:09.320 "enable_ktls": false 00:23:09.320 } 00:23:09.320 }, 00:23:09.320 { 00:23:09.320 "method": "sock_impl_set_options", 00:23:09.320 "params": { 00:23:09.320 "impl_name": "posix", 00:23:09.320 "recv_buf_size": 2097152, 00:23:09.320 "send_buf_size": 2097152, 00:23:09.320 "enable_recv_pipe": true, 00:23:09.320 "enable_quickack": false, 00:23:09.320 "enable_placement_id": 0, 00:23:09.320 "enable_zerocopy_send_server": true, 00:23:09.320 "enable_zerocopy_send_client": false, 00:23:09.320 "zerocopy_threshold": 0, 00:23:09.320 "tls_version": 0, 00:23:09.320 "enable_ktls": false 00:23:09.320 } 00:23:09.320 } 00:23:09.320 ] 00:23:09.320 }, 00:23:09.320 { 00:23:09.320 "subsystem": "vmd", 00:23:09.320 "config": [] 00:23:09.320 }, 00:23:09.320 { 00:23:09.320 "subsystem": "accel", 00:23:09.320 "config": [ 00:23:09.320 { 00:23:09.320 "method": "accel_set_options", 00:23:09.320 "params": { 00:23:09.320 "small_cache_size": 128, 00:23:09.320 "large_cache_size": 16, 00:23:09.320 "task_count": 2048, 00:23:09.320 "sequence_count": 2048, 00:23:09.320 "buf_count": 2048 00:23:09.320 } 00:23:09.320 } 00:23:09.320 ] 00:23:09.320 }, 00:23:09.320 { 00:23:09.320 "subsystem": "bdev", 00:23:09.320 "config": [ 00:23:09.320 { 00:23:09.320 "method": "bdev_set_options", 00:23:09.320 "params": { 00:23:09.320 "bdev_io_pool_size": 65535, 00:23:09.320 "bdev_io_cache_size": 256, 00:23:09.320 "bdev_auto_examine": true, 00:23:09.320 "iobuf_small_cache_size": 128, 00:23:09.320 "iobuf_large_cache_size": 16 00:23:09.320 } 00:23:09.320 }, 00:23:09.320 { 00:23:09.320 "method": "bdev_raid_set_options", 
00:23:09.320 "params": { 00:23:09.320 "process_window_size_kb": 1024, 00:23:09.320 "process_max_bandwidth_mb_sec": 0 00:23:09.320 } 00:23:09.320 }, 00:23:09.320 { 00:23:09.320 "method": "bdev_iscsi_set_options", 00:23:09.321 "params": { 00:23:09.321 "timeout_sec": 30 00:23:09.321 } 00:23:09.321 }, 00:23:09.321 { 00:23:09.321 "method": "bdev_nvme_set_options", 00:23:09.321 "params": { 00:23:09.321 "action_on_timeout": "none", 00:23:09.321 "timeout_us": 0, 00:23:09.321 "timeout_admin_us": 0, 00:23:09.321 "keep_alive_timeout_ms": 10000, 00:23:09.321 "arbitration_burst": 0, 00:23:09.321 "low_priority_weight": 0, 00:23:09.321 "medium_priority_weight": 0, 00:23:09.321 "high_priority_weight": 0, 00:23:09.321 "nvme_adminq_poll_period_us": 10000, 00:23:09.321 "nvme_ioq_poll_period_us": 0, 00:23:09.321 "io_queue_requests": 512, 00:23:09.321 "delay_cmd_submit": true, 00:23:09.321 "transport_retry_count": 4, 00:23:09.321 "bdev_retry_count": 3, 00:23:09.321 "transport_ack_timeout": 0, 00:23:09.321 "ctrlr_loss_timeout_sec": 0, 00:23:09.321 "reconnect_delay_sec": 0, 00:23:09.321 "fast_io_fail_timeout_sec": 0, 00:23:09.321 "disable_auto_failback": false, 00:23:09.321 "generate_uuids": false, 00:23:09.321 "transport_tos": 0, 00:23:09.321 "nvme_error_stat": false, 00:23:09.321 "rdma_srq_size": 0, 00:23:09.321 "io_path_stat": false, 00:23:09.321 "allow_accel_sequence": false, 00:23:09.321 "rdma_max_cq_size": 0, 00:23:09.321 "rdma_cm_event_timeout_ms": 0, 00:23:09.321 "dhchap_digests": [ 00:23:09.321 "sha256", 00:23:09.321 "sha384", 00:23:09.321 "sha512" 00:23:09.321 ], 00:23:09.321 "dhchap_dhgroups": [ 00:23:09.321 "null", 00:23:09.321 "ffdhe2048", 00:23:09.321 "ffdhe3072", 00:23:09.321 "ffdhe4096", 00:23:09.321 "ffdhe6144", 00:23:09.321 "ffdhe8192" 00:23:09.321 ] 00:23:09.321 } 00:23:09.321 }, 00:23:09.321 { 00:23:09.321 "method": "bdev_nvme_attach_controller", 00:23:09.321 "params": { 00:23:09.321 "name": "TLSTEST", 00:23:09.321 "trtype": "TCP", 00:23:09.321 "adrfam": "IPv4", 
00:23:09.321 "traddr": "10.0.0.2", 00:23:09.321 "trsvcid": "4420", 00:23:09.321 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.321 "prchk_reftag": false, 00:23:09.321 "prchk_guard": false, 00:23:09.321 "ctrlr_loss_timeout_sec": 0, 00:23:09.321 "reconnect_delay_sec": 0, 00:23:09.321 "fast_io_fail_timeout_sec": 0, 00:23:09.321 "psk": "/tmp/tmp.lVO6vyF9ks", 00:23:09.321 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:09.321 "hdgst": false, 00:23:09.321 "ddgst": false 00:23:09.321 } 00:23:09.321 }, 00:23:09.321 { 00:23:09.321 "method": "bdev_nvme_set_hotplug", 00:23:09.321 "params": { 00:23:09.321 "period_us": 100000, 00:23:09.321 "enable": false 00:23:09.321 } 00:23:09.321 }, 00:23:09.321 { 00:23:09.321 "method": "bdev_wait_for_examine" 00:23:09.321 } 00:23:09.321 ] 00:23:09.321 }, 00:23:09.321 { 00:23:09.321 "subsystem": "nbd", 00:23:09.321 "config": [] 00:23:09.321 } 00:23:09.321 ] 00:23:09.321 }' 00:23:09.321 01:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 1867803 00:23:09.321 01:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1867803 ']' 00:23:09.321 01:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1867803 00:23:09.321 01:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:09.321 01:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:09.321 01:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1867803 00:23:09.321 01:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:09.321 01:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:09.321 01:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1867803' 00:23:09.321 killing process with 
pid 1867803 00:23:09.321 01:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1867803 00:23:09.321 Received shutdown signal, test time was about 10.000000 seconds 00:23:09.321 00:23:09.321 Latency(us) 00:23:09.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.321 =================================================================================================================== 00:23:09.321 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:09.321 [2024-07-26 01:05:39.552254] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:09.321 01:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1867803 00:23:09.579 01:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 1867520 00:23:09.579 01:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1867520 ']' 00:23:09.579 01:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1867520 00:23:09.579 01:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:09.579 01:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:09.579 01:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1867520 00:23:09.579 01:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:09.579 01:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:09.579 01:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1867520' 00:23:09.579 killing process with pid 1867520 00:23:09.579 01:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 1867520 00:23:09.579 [2024-07-26 01:05:39.803000] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:09.579 01:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1867520 00:23:09.837 01:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:09.837 01:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:09.837 "subsystems": [ 00:23:09.837 { 00:23:09.837 "subsystem": "keyring", 00:23:09.837 "config": [] 00:23:09.837 }, 00:23:09.837 { 00:23:09.837 "subsystem": "iobuf", 00:23:09.837 "config": [ 00:23:09.837 { 00:23:09.837 "method": "iobuf_set_options", 00:23:09.837 "params": { 00:23:09.837 "small_pool_count": 8192, 00:23:09.837 "large_pool_count": 1024, 00:23:09.837 "small_bufsize": 8192, 00:23:09.837 "large_bufsize": 135168 00:23:09.837 } 00:23:09.837 } 00:23:09.837 ] 00:23:09.837 }, 00:23:09.837 { 00:23:09.837 "subsystem": "sock", 00:23:09.837 "config": [ 00:23:09.837 { 00:23:09.837 "method": "sock_set_default_impl", 00:23:09.837 "params": { 00:23:09.837 "impl_name": "posix" 00:23:09.837 } 00:23:09.837 }, 00:23:09.837 { 00:23:09.837 "method": "sock_impl_set_options", 00:23:09.837 "params": { 00:23:09.837 "impl_name": "ssl", 00:23:09.837 "recv_buf_size": 4096, 00:23:09.837 "send_buf_size": 4096, 00:23:09.837 "enable_recv_pipe": true, 00:23:09.837 "enable_quickack": false, 00:23:09.837 "enable_placement_id": 0, 00:23:09.837 "enable_zerocopy_send_server": true, 00:23:09.837 "enable_zerocopy_send_client": false, 00:23:09.837 "zerocopy_threshold": 0, 00:23:09.837 "tls_version": 0, 00:23:09.837 "enable_ktls": false 00:23:09.837 } 00:23:09.837 }, 00:23:09.837 { 00:23:09.837 "method": "sock_impl_set_options", 00:23:09.837 "params": { 00:23:09.837 "impl_name": "posix", 00:23:09.837 "recv_buf_size": 2097152, 00:23:09.837 "send_buf_size": 
2097152, 00:23:09.837 "enable_recv_pipe": true, 00:23:09.837 "enable_quickack": false, 00:23:09.837 "enable_placement_id": 0, 00:23:09.837 "enable_zerocopy_send_server": true, 00:23:09.837 "enable_zerocopy_send_client": false, 00:23:09.837 "zerocopy_threshold": 0, 00:23:09.837 "tls_version": 0, 00:23:09.837 "enable_ktls": false 00:23:09.837 } 00:23:09.837 } 00:23:09.837 ] 00:23:09.837 }, 00:23:09.837 { 00:23:09.837 "subsystem": "vmd", 00:23:09.837 "config": [] 00:23:09.837 }, 00:23:09.837 { 00:23:09.837 "subsystem": "accel", 00:23:09.837 "config": [ 00:23:09.837 { 00:23:09.837 "method": "accel_set_options", 00:23:09.837 "params": { 00:23:09.837 "small_cache_size": 128, 00:23:09.837 "large_cache_size": 16, 00:23:09.837 "task_count": 2048, 00:23:09.837 "sequence_count": 2048, 00:23:09.837 "buf_count": 2048 00:23:09.837 } 00:23:09.837 } 00:23:09.837 ] 00:23:09.837 }, 00:23:09.837 { 00:23:09.837 "subsystem": "bdev", 00:23:09.837 "config": [ 00:23:09.837 { 00:23:09.837 "method": "bdev_set_options", 00:23:09.837 "params": { 00:23:09.837 "bdev_io_pool_size": 65535, 00:23:09.837 "bdev_io_cache_size": 256, 00:23:09.837 "bdev_auto_examine": true, 00:23:09.837 "iobuf_small_cache_size": 128, 00:23:09.837 "iobuf_large_cache_size": 16 00:23:09.837 } 00:23:09.837 }, 00:23:09.837 { 00:23:09.837 "method": "bdev_raid_set_options", 00:23:09.837 "params": { 00:23:09.837 "process_window_size_kb": 1024, 00:23:09.837 "process_max_bandwidth_mb_sec": 0 00:23:09.837 } 00:23:09.837 }, 00:23:09.837 { 00:23:09.837 "method": "bdev_iscsi_set_options", 00:23:09.837 "params": { 00:23:09.837 "timeout_sec": 30 00:23:09.837 } 00:23:09.837 }, 00:23:09.837 { 00:23:09.837 "method": "bdev_nvme_set_options", 00:23:09.837 "params": { 00:23:09.837 "action_on_timeout": "none", 00:23:09.837 "timeout_us": 0, 00:23:09.837 "timeout_admin_us": 0, 00:23:09.837 "keep_alive_timeout_ms": 10000, 00:23:09.837 "arbitration_burst": 0, 00:23:09.837 "low_priority_weight": 0, 00:23:09.837 "medium_priority_weight": 0, 
00:23:09.837 "high_priority_weight": 0, 00:23:09.837 "nvme_adminq_poll_period_us": 10000, 00:23:09.837 "nvme_ioq_poll_period_us": 0, 00:23:09.837 "io_queue_requests": 0, 00:23:09.837 "delay_cmd_submit": true, 00:23:09.837 "transport_retry_count": 4, 00:23:09.837 "bdev_retry_count": 3, 00:23:09.837 "transport_ack_timeout": 0, 00:23:09.837 "ctrlr_loss_timeout_sec": 0, 00:23:09.837 "reconnect_delay_sec": 0, 00:23:09.837 "fast_io_fail_timeout_sec": 0, 00:23:09.837 "disable_auto_failback": false, 00:23:09.837 "generate_uuids": false, 00:23:09.837 "transport_tos": 0, 00:23:09.837 "nvme_error_stat": false, 00:23:09.837 "rdma_srq_size": 0, 00:23:09.837 "io_path_stat": false, 00:23:09.837 "allow_accel_sequence": false, 00:23:09.837 "rdma_max_cq_size": 0, 00:23:09.837 "rdma_cm_event_timeout_ms": 0, 00:23:09.837 "dhchap_digests": [ 00:23:09.837 "sha256", 00:23:09.837 "sha384", 00:23:09.837 "sha512" 00:23:09.837 ], 00:23:09.837 "dhchap_dhgroups": [ 00:23:09.837 "null", 00:23:09.837 "ffdhe2048", 00:23:09.837 "ffdhe3072", 00:23:09.837 "ffdhe4096", 00:23:09.837 "ffdhe6144", 00:23:09.837 "ffdhe8192" 00:23:09.837 ] 00:23:09.837 } 00:23:09.838 }, 00:23:09.838 { 00:23:09.838 "method": "bdev_nvme_set_hotplug", 00:23:09.838 "params": { 00:23:09.838 "period_us": 100000, 00:23:09.838 "enable": false 00:23:09.838 } 00:23:09.838 }, 00:23:09.838 { 00:23:09.838 "method": "bdev_malloc_create", 00:23:09.838 "params": { 00:23:09.838 "name": "malloc0", 00:23:09.838 "num_blocks": 8192, 00:23:09.838 "block_size": 4096, 00:23:09.838 "physical_block_size": 4096, 00:23:09.838 "uuid": "8986f209-6922-4a59-b5a8-a0684812a404", 00:23:09.838 "optimal_io_boundary": 0, 00:23:09.838 "md_size": 0, 00:23:09.838 "dif_type": 0, 00:23:09.838 "dif_is_head_of_md": false, 00:23:09.838 "dif_pi_format": 0 00:23:09.838 } 00:23:09.838 }, 00:23:09.838 { 00:23:09.838 "method": "bdev_wait_for_examine" 00:23:09.838 } 00:23:09.838 ] 00:23:09.838 }, 00:23:09.838 { 00:23:09.838 "subsystem": "nbd", 00:23:09.838 "config": [] 
00:23:09.838 }, 00:23:09.838 { 00:23:09.838 "subsystem": "scheduler", 00:23:09.838 "config": [ 00:23:09.838 { 00:23:09.838 "method": "framework_set_scheduler", 00:23:09.838 "params": { 00:23:09.838 "name": "static" 00:23:09.838 } 00:23:09.838 } 00:23:09.838 ] 00:23:09.838 }, 00:23:09.838 { 00:23:09.838 "subsystem": "nvmf", 00:23:09.838 "config": [ 00:23:09.838 { 00:23:09.838 "method": "nvmf_set_config", 00:23:09.838 "params": { 00:23:09.838 "discovery_filter": "match_any", 00:23:09.838 "admin_cmd_passthru": { 00:23:09.838 "identify_ctrlr": false 00:23:09.838 } 00:23:09.838 } 00:23:09.838 }, 00:23:09.838 { 00:23:09.838 "method": "nvmf_set_max_subsystems", 00:23:09.838 "params": { 00:23:09.838 "max_subsystems": 1024 00:23:09.838 } 00:23:09.838 }, 00:23:09.838 { 00:23:09.838 "method": "nvmf_set_crdt", 00:23:09.838 "params": { 00:23:09.838 "crdt1": 0, 00:23:09.838 "crdt2": 0, 00:23:09.838 "crdt3": 0 00:23:09.838 } 00:23:09.838 }, 00:23:09.838 { 00:23:09.838 "method": "nvmf_create_transport", 00:23:09.838 "params": { 00:23:09.838 "trtype": "TCP", 00:23:09.838 "max_queue_depth": 128, 00:23:09.838 "max_io_qpairs_per_ctrlr": 127, 00:23:09.838 "in_capsule_data_size": 4096, 00:23:09.838 "max_io_size": 131072, 00:23:09.838 "io_unit_size": 131072, 00:23:09.838 "max_aq_depth": 128, 00:23:09.838 "num_shared_buffers": 511, 00:23:09.838 "buf_cache_size": 4294967295, 00:23:09.838 "dif_insert_or_strip": false, 00:23:09.838 "zcopy": false, 00:23:09.838 "c2h_success": false, 00:23:09.838 "sock_priority": 0, 00:23:09.838 "abort_timeout_sec": 1, 00:23:09.838 "ack_timeout": 0, 00:23:09.838 "data_wr_pool_size": 0 00:23:09.838 } 00:23:09.838 }, 00:23:09.838 { 00:23:09.838 "method": "nvmf_create_subsystem", 00:23:09.838 "params": { 00:23:09.838 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.838 "allow_any_host": false, 00:23:09.838 "serial_number": "SPDK00000000000001", 00:23:09.838 "model_number": "SPDK bdev Controller", 00:23:09.838 "max_namespaces": 10, 00:23:09.838 "min_cntlid": 1, 
00:23:09.838 "max_cntlid": 65519, 00:23:09.838 "ana_reporting": false 00:23:09.838 } 00:23:09.838 }, 00:23:09.838 { 00:23:09.838 "method": "nvmf_subsystem_add_host", 00:23:09.838 "params": { 00:23:09.838 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.838 "host": "nqn.2016-06.io.spdk:host1", 00:23:09.838 "psk": "/tmp/tmp.lVO6vyF9ks" 00:23:09.838 } 00:23:09.838 }, 00:23:09.838 { 00:23:09.838 "method": "nvmf_subsystem_add_ns", 00:23:09.838 "params": { 00:23:09.838 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.838 "namespace": { 00:23:09.838 "nsid": 1, 00:23:09.838 "bdev_name": "malloc0", 00:23:09.838 "nguid": "8986F20969224A59B5A8A0684812A404", 00:23:09.838 "uuid": "8986f209-6922-4a59-b5a8-a0684812a404", 00:23:09.838 "no_auto_visible": false 00:23:09.838 } 00:23:09.838 } 00:23:09.838 }, 00:23:09.838 { 00:23:09.838 "method": "nvmf_subsystem_add_listener", 00:23:09.838 "params": { 00:23:09.838 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.838 "listen_address": { 00:23:09.838 "trtype": "TCP", 00:23:09.838 "adrfam": "IPv4", 00:23:09.838 "traddr": "10.0.0.2", 00:23:09.838 "trsvcid": "4420" 00:23:09.838 }, 00:23:09.838 "secure_channel": true 00:23:09.838 } 00:23:09.838 } 00:23:09.838 ] 00:23:09.838 } 00:23:09.838 ] 00:23:09.838 }' 00:23:09.838 01:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:09.838 01:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:09.838 01:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.838 01:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1867961 00:23:09.838 01:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:09.838 01:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1867961 
00:23:09.838 01:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1867961 ']' 00:23:09.838 01:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.838 01:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:09.838 01:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.838 01:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:09.838 01:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.838 [2024-07-26 01:05:40.094807] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:23:09.838 [2024-07-26 01:05:40.094901] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.838 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.838 [2024-07-26 01:05:40.160585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.838 [2024-07-26 01:05:40.244800] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.838 [2024-07-26 01:05:40.244855] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.838 [2024-07-26 01:05:40.244868] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.838 [2024-07-26 01:05:40.244880] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:09.838 [2024-07-26 01:05:40.244890] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:09.838 [2024-07-26 01:05:40.244961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.096 [2024-07-26 01:05:40.470508] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.096 [2024-07-26 01:05:40.491389] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:10.096 [2024-07-26 01:05:40.507452] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:10.096 [2024-07-26 01:05:40.507692] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.662 01:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:10.662 01:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:10.662 01:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:10.662 01:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:10.662 01:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.921 01:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:10.921 01:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1868109 00:23:10.921 01:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1868109 /var/tmp/bdevperf.sock 00:23:10.921 01:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1868109 ']' 00:23:10.921 01:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:10.921 01:05:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:10.921 01:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:10.921 01:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:10.921 "subsystems": [ 00:23:10.921 { 00:23:10.921 "subsystem": "keyring", 00:23:10.921 "config": [] 00:23:10.921 }, 00:23:10.921 { 00:23:10.921 "subsystem": "iobuf", 00:23:10.921 "config": [ 00:23:10.921 { 00:23:10.921 "method": "iobuf_set_options", 00:23:10.921 "params": { 00:23:10.921 "small_pool_count": 8192, 00:23:10.921 "large_pool_count": 1024, 00:23:10.921 "small_bufsize": 8192, 00:23:10.921 "large_bufsize": 135168 00:23:10.921 } 00:23:10.921 } 00:23:10.921 ] 00:23:10.921 }, 00:23:10.921 { 00:23:10.921 "subsystem": "sock", 00:23:10.921 "config": [ 00:23:10.921 { 00:23:10.921 "method": "sock_set_default_impl", 00:23:10.921 "params": { 00:23:10.921 "impl_name": "posix" 00:23:10.921 } 00:23:10.921 }, 00:23:10.921 { 00:23:10.921 "method": "sock_impl_set_options", 00:23:10.921 "params": { 00:23:10.921 "impl_name": "ssl", 00:23:10.921 "recv_buf_size": 4096, 00:23:10.921 "send_buf_size": 4096, 00:23:10.921 "enable_recv_pipe": true, 00:23:10.921 "enable_quickack": false, 00:23:10.921 "enable_placement_id": 0, 00:23:10.921 "enable_zerocopy_send_server": true, 00:23:10.921 "enable_zerocopy_send_client": false, 00:23:10.921 "zerocopy_threshold": 0, 00:23:10.921 "tls_version": 0, 00:23:10.921 "enable_ktls": false 00:23:10.921 } 00:23:10.921 }, 00:23:10.921 { 00:23:10.921 "method": "sock_impl_set_options", 00:23:10.921 "params": { 00:23:10.921 "impl_name": "posix", 00:23:10.921 "recv_buf_size": 2097152, 00:23:10.921 "send_buf_size": 2097152, 00:23:10.921 "enable_recv_pipe": true, 00:23:10.921 "enable_quickack": false, 00:23:10.921 
"enable_placement_id": 0, 00:23:10.921 "enable_zerocopy_send_server": true, 00:23:10.921 "enable_zerocopy_send_client": false, 00:23:10.921 "zerocopy_threshold": 0, 00:23:10.921 "tls_version": 0, 00:23:10.921 "enable_ktls": false 00:23:10.921 } 00:23:10.921 } 00:23:10.921 ] 00:23:10.921 }, 00:23:10.921 { 00:23:10.921 "subsystem": "vmd", 00:23:10.921 "config": [] 00:23:10.921 }, 00:23:10.921 { 00:23:10.921 "subsystem": "accel", 00:23:10.921 "config": [ 00:23:10.921 { 00:23:10.921 "method": "accel_set_options", 00:23:10.921 "params": { 00:23:10.921 "small_cache_size": 128, 00:23:10.921 "large_cache_size": 16, 00:23:10.921 "task_count": 2048, 00:23:10.921 "sequence_count": 2048, 00:23:10.921 "buf_count": 2048 00:23:10.921 } 00:23:10.921 } 00:23:10.921 ] 00:23:10.921 }, 00:23:10.921 { 00:23:10.921 "subsystem": "bdev", 00:23:10.921 "config": [ 00:23:10.921 { 00:23:10.921 "method": "bdev_set_options", 00:23:10.921 "params": { 00:23:10.921 "bdev_io_pool_size": 65535, 00:23:10.921 "bdev_io_cache_size": 256, 00:23:10.921 "bdev_auto_examine": true, 00:23:10.921 "iobuf_small_cache_size": 128, 00:23:10.921 "iobuf_large_cache_size": 16 00:23:10.921 } 00:23:10.921 }, 00:23:10.921 { 00:23:10.921 "method": "bdev_raid_set_options", 00:23:10.921 "params": { 00:23:10.921 "process_window_size_kb": 1024, 00:23:10.921 "process_max_bandwidth_mb_sec": 0 00:23:10.921 } 00:23:10.921 }, 00:23:10.921 { 00:23:10.921 "method": "bdev_iscsi_set_options", 00:23:10.921 "params": { 00:23:10.921 "timeout_sec": 30 00:23:10.921 } 00:23:10.921 }, 00:23:10.921 { 00:23:10.921 "method": "bdev_nvme_set_options", 00:23:10.921 "params": { 00:23:10.921 "action_on_timeout": "none", 00:23:10.921 "timeout_us": 0, 00:23:10.921 "timeout_admin_us": 0, 00:23:10.921 "keep_alive_timeout_ms": 10000, 00:23:10.921 "arbitration_burst": 0, 00:23:10.921 "low_priority_weight": 0, 00:23:10.921 "medium_priority_weight": 0, 00:23:10.921 "high_priority_weight": 0, 00:23:10.921 "nvme_adminq_poll_period_us": 10000, 00:23:10.921 
"nvme_ioq_poll_period_us": 0, 00:23:10.921 "io_queue_requests": 512, 00:23:10.921 "delay_cmd_submit": true, 00:23:10.921 "transport_retry_count": 4, 00:23:10.921 "bdev_retry_count": 3, 00:23:10.921 "transport_ack_timeout": 0, 00:23:10.921 "ctrlr_loss_timeout_sec": 0, 00:23:10.921 "reconnect_delay_sec": 0, 00:23:10.921 "fast_io_fail_timeout_sec": 0, 00:23:10.921 "disable_auto_failback": false, 00:23:10.921 "generate_uuids": false, 00:23:10.921 "transport_tos": 0, 00:23:10.921 "nvme_error_stat": false, 00:23:10.921 "rdma_srq_size": 0, 00:23:10.921 "io_path_stat": false, 00:23:10.921 "allow_accel_sequence": false, 00:23:10.921 "rdma_max_cq_size": 0, 00:23:10.921 "rdma_cm_event_timeout_ms": 0, 00:23:10.921 "dhchap_digests": [ 00:23:10.921 "sha256", 00:23:10.921 "sha384", 00:23:10.921 "sha512" 00:23:10.921 ], 00:23:10.921 "dhchap_dhgroups": [ 00:23:10.921 "null", 00:23:10.921 "ffdhe2048", 00:23:10.921 01:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:10.921 "ffdhe3072", 00:23:10.921 "ffdhe4096", 00:23:10.921 "ffdhe6144", 00:23:10.921 "ffdhe8192" 00:23:10.921 ] 00:23:10.921 } 00:23:10.921 }, 00:23:10.921 { 00:23:10.921 "method": "bdev_nvme_attach_controller", 00:23:10.921 "params": { 00:23:10.921 "name": "TLSTEST", 00:23:10.921 "trtype": "TCP", 00:23:10.921 "adrfam": "IPv4", 00:23:10.921 "traddr": "10.0.0.2", 00:23:10.921 "trsvcid": "4420", 00:23:10.921 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.921 "prchk_reftag": false, 00:23:10.921 "prchk_guard": false, 00:23:10.921 "ctrlr_loss_timeout_sec": 0, 00:23:10.921 "reconnect_delay_sec": 0, 00:23:10.921 "fast_io_fail_timeout_sec": 0, 00:23:10.921 "psk": "/tmp/tmp.lVO6vyF9ks", 00:23:10.921 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:10.921 "hdgst": false, 00:23:10.921 "ddgst": false 00:23:10.921 } 00:23:10.921 }, 00:23:10.921 { 00:23:10.921 "method": "bdev_nvme_set_hotplug", 00:23:10.921 "params": { 00:23:10.921 "period_us": 100000, 00:23:10.921 "enable": false 00:23:10.921 } 00:23:10.921 }, 00:23:10.921 { 00:23:10.921 "method": "bdev_wait_for_examine" 00:23:10.921 } 00:23:10.921 ] 00:23:10.921 }, 00:23:10.921 { 00:23:10.921 "subsystem": "nbd", 00:23:10.921 "config": [] 00:23:10.921 } 00:23:10.921 ] 00:23:10.921 }' 00:23:10.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:10.922 01:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:10.922 01:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.922 [2024-07-26 01:05:41.159531] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:23:10.922 [2024-07-26 01:05:41.159617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1868109 ] 00:23:10.922 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.922 [2024-07-26 01:05:41.217064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.922 [2024-07-26 01:05:41.303620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:11.180 [2024-07-26 01:05:41.469682] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:11.180 [2024-07-26 01:05:41.469822] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:11.745 01:05:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:11.745 01:05:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:11.745 01:05:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:12.003 Running I/O for 10 seconds... 
00:23:21.966 00:23:21.966 Latency(us) 00:23:21.966 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.966 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:21.966 Verification LBA range: start 0x0 length 0x2000 00:23:21.966 TLSTESTn1 : 10.02 3566.87 13.93 0.00 0.00 35822.61 6262.33 57477.50 00:23:21.966 =================================================================================================================== 00:23:21.966 Total : 3566.87 13.93 0.00 0.00 35822.61 6262.33 57477.50 00:23:21.966 0 00:23:21.966 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:21.966 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 1868109 00:23:21.966 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1868109 ']' 00:23:21.966 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1868109 00:23:21.966 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:21.966 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:21.966 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1868109 00:23:21.966 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:21.966 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:21.966 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1868109' 00:23:21.966 killing process with pid 1868109 00:23:21.966 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1868109 00:23:21.966 Received shutdown signal, test time was about 10.000000 seconds 
00:23:21.966 00:23:21.966 Latency(us) 00:23:21.966 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.966 =================================================================================================================== 00:23:21.966 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:21.966 [2024-07-26 01:05:52.379693] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:21.966 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1868109 00:23:22.224 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 1867961 00:23:22.224 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1867961 ']' 00:23:22.224 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1867961 00:23:22.224 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:22.224 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:22.224 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1867961 00:23:22.225 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:22.225 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:22.225 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1867961' 00:23:22.225 killing process with pid 1867961 00:23:22.225 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1867961 00:23:22.225 [2024-07-26 01:05:52.634761] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:22.225 
01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1867961 00:23:22.482 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:22.482 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:22.482 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:22.482 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.482 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:22.482 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1869555 00:23:22.482 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1869555 00:23:22.482 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1869555 ']' 00:23:22.482 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.482 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:22.482 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:22.482 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:22.482 01:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.741 [2024-07-26 01:05:52.916241] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:23:22.741 [2024-07-26 01:05:52.916335] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:22.741 EAL: No free 2048 kB hugepages reported on node 1 00:23:22.741 [2024-07-26 01:05:52.977912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.741 [2024-07-26 01:05:53.060750] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:22.741 [2024-07-26 01:05:53.060802] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:22.741 [2024-07-26 01:05:53.060833] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:22.741 [2024-07-26 01:05:53.060845] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:22.741 [2024-07-26 01:05:53.060854] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:22.741 [2024-07-26 01:05:53.060880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.998 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:22.998 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:22.998 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:22.998 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:22.998 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.998 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.998 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.lVO6vyF9ks 00:23:22.998 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lVO6vyF9ks 00:23:22.998 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:22.998 [2024-07-26 01:05:53.417183] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.255 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:23.512 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:23.770 [2024-07-26 01:05:53.970632] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:23.770 [2024-07-26 01:05:53.970889] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:23.770 01:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:24.028 malloc0 00:23:24.028 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:24.285 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lVO6vyF9ks 00:23:24.543 [2024-07-26 01:05:54.756258] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:24.543 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1869718 00:23:24.543 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:24.543 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:24.543 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1869718 /var/tmp/bdevperf.sock 00:23:24.544 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1869718 ']' 00:23:24.544 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:24.544 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:24.544 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:23:24.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:24.544 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:24.544 01:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.544 [2024-07-26 01:05:54.813595] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:23:24.544 [2024-07-26 01:05:54.813677] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1869718 ] 00:23:24.544 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.544 [2024-07-26 01:05:54.872301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.544 [2024-07-26 01:05:54.960170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.802 01:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:24.802 01:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:24.802 01:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lVO6vyF9ks 00:23:25.060 01:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:25.320 [2024-07-26 01:05:55.535139] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:25.320 nvme0n1 00:23:25.320 01:05:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:25.320 Running I/O for 1 seconds... 00:23:26.725 00:23:26.725 Latency(us) 00:23:26.725 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.725 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:26.725 Verification LBA range: start 0x0 length 0x2000 00:23:26.726 nvme0n1 : 1.02 3450.92 13.48 0.00 0.00 36704.32 6893.42 39224.51 00:23:26.726 =================================================================================================================== 00:23:26.726 Total : 3450.92 13.48 0.00 0.00 36704.32 6893.42 39224.51 00:23:26.726 0 00:23:26.726 01:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 1869718 00:23:26.726 01:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1869718 ']' 00:23:26.726 01:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1869718 00:23:26.726 01:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:26.726 01:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:26.726 01:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1869718 00:23:26.726 01:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:26.726 01:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:26.726 01:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1869718' 00:23:26.726 killing process with pid 1869718 00:23:26.726 01:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 
1869718 00:23:26.726 Received shutdown signal, test time was about 1.000000 seconds 00:23:26.726 00:23:26.726 Latency(us) 00:23:26.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.726 =================================================================================================================== 00:23:26.726 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:26.726 01:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1869718 00:23:26.726 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 1869555 00:23:26.726 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1869555 ']' 00:23:26.726 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1869555 00:23:26.726 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:26.726 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:26.726 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1869555 00:23:26.726 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:26.726 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:26.726 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1869555' 00:23:26.726 killing process with pid 1869555 00:23:26.726 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1869555 00:23:26.726 [2024-07-26 01:05:57.056323] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:26.726 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1869555 
00:23:26.986 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:23:26.986 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:26.986 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:26.986 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.986 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1870089 00:23:26.986 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:26.986 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1870089 00:23:26.986 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1870089 ']' 00:23:26.986 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.986 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:26.986 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.986 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:26.986 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.986 [2024-07-26 01:05:57.355992] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:23:26.986 [2024-07-26 01:05:57.356116] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.986 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.246 [2024-07-26 01:05:57.421609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.246 [2024-07-26 01:05:57.508763] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.246 [2024-07-26 01:05:57.508827] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:27.246 [2024-07-26 01:05:57.508854] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.246 [2024-07-26 01:05:57.508865] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.246 [2024-07-26 01:05:57.508883] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:27.246 [2024-07-26 01:05:57.508908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.246 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:27.246 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:27.246 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:27.246 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:27.246 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.246 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.246 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:23:27.246 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.246 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.246 [2024-07-26 01:05:57.652648] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.505 malloc0 00:23:27.505 [2024-07-26 01:05:57.685480] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:27.505 [2024-07-26 01:05:57.698267] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.505 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.505 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=1870138 00:23:27.505 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:27.505 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@256 -- # waitforlisten 1870138 /var/tmp/bdevperf.sock 00:23:27.505 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1870138 ']' 00:23:27.505 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.505 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:27.505 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:27.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:27.505 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:27.505 01:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.505 [2024-07-26 01:05:57.767505] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:23:27.505 [2024-07-26 01:05:57.767582] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1870138 ] 00:23:27.505 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.505 [2024-07-26 01:05:57.834308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.505 [2024-07-26 01:05:57.925935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.763 01:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:27.763 01:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:27.763 01:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lVO6vyF9ks 00:23:28.021 01:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:28.280 [2024-07-26 01:05:58.618193] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:28.280 nvme0n1 00:23:28.280 01:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:28.538 Running I/O for 1 seconds... 
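The two `rpc.py` invocations above register the PSK file and attach a TLS-enabled NVMe/TCP controller over the bdevperf socket. As a rough sketch, the flags visible in the log map onto JSON-RPC payloads like the following (reconstructed from this log's own parameters; the `/tmp/tmp.lVO6vyF9ks` key path is specific to this run, and field names follow the `save_config` dump later in the log):

```python
import json

# keyring_file_add_key key0 /tmp/tmp.lVO6vyF9ks
add_key = {"method": "keyring_file_add_key",
           "params": {"name": "key0", "path": "/tmp/tmp.lVO6vyF9ks"}}

# bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
#   --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
attach = {"method": "bdev_nvme_attach_controller",
          "params": {"name": "nvme0", "trtype": "TCP", "adrfam": "IPv4",
                     "traddr": "10.0.0.2", "trsvcid": "4420",
                     "subnqn": "nqn.2016-06.io.spdk:cnode1",
                     "hostnqn": "nqn.2016-06.io.spdk:host1",
                     "psk": "key0"}}

print(json.dumps([add_key["method"], attach["method"]]))
```

The "TLS support is considered experimental" notice that follows confirms the `--psk key0` path was taken on the attach.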
00:23:29.475 00:23:29.476 Latency(us) 00:23:29.476 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.476 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:29.476 Verification LBA range: start 0x0 length 0x2000 00:23:29.476 nvme0n1 : 1.03 3409.40 13.32 0.00 0.00 37046.54 6602.15 33981.63 00:23:29.476 =================================================================================================================== 00:23:29.476 Total : 3409.40 13.32 0.00 0.00 37046.54 6602.15 33981.63 00:23:29.476 0 00:23:29.476 01:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:23:29.476 01:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.476 01:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.734 01:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.734 01:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:23:29.734 "subsystems": [ 00:23:29.734 { 00:23:29.734 "subsystem": "keyring", 00:23:29.734 "config": [ 00:23:29.734 { 00:23:29.734 "method": "keyring_file_add_key", 00:23:29.734 "params": { 00:23:29.734 "name": "key0", 00:23:29.734 "path": "/tmp/tmp.lVO6vyF9ks" 00:23:29.734 } 00:23:29.734 } 00:23:29.734 ] 00:23:29.734 }, 00:23:29.734 { 00:23:29.734 "subsystem": "iobuf", 00:23:29.734 "config": [ 00:23:29.734 { 00:23:29.734 "method": "iobuf_set_options", 00:23:29.734 "params": { 00:23:29.734 "small_pool_count": 8192, 00:23:29.734 "large_pool_count": 1024, 00:23:29.734 "small_bufsize": 8192, 00:23:29.734 "large_bufsize": 135168 00:23:29.734 } 00:23:29.734 } 00:23:29.734 ] 00:23:29.734 }, 00:23:29.734 { 00:23:29.734 "subsystem": "sock", 00:23:29.734 "config": [ 00:23:29.734 { 00:23:29.734 "method": "sock_set_default_impl", 00:23:29.734 "params": { 00:23:29.734 "impl_name": "posix" 00:23:29.734 } 
00:23:29.734 }, 00:23:29.734 { 00:23:29.734 "method": "sock_impl_set_options", 00:23:29.734 "params": { 00:23:29.734 "impl_name": "ssl", 00:23:29.734 "recv_buf_size": 4096, 00:23:29.734 "send_buf_size": 4096, 00:23:29.734 "enable_recv_pipe": true, 00:23:29.734 "enable_quickack": false, 00:23:29.734 "enable_placement_id": 0, 00:23:29.734 "enable_zerocopy_send_server": true, 00:23:29.734 "enable_zerocopy_send_client": false, 00:23:29.734 "zerocopy_threshold": 0, 00:23:29.734 "tls_version": 0, 00:23:29.734 "enable_ktls": false 00:23:29.734 } 00:23:29.734 }, 00:23:29.734 { 00:23:29.734 "method": "sock_impl_set_options", 00:23:29.734 "params": { 00:23:29.734 "impl_name": "posix", 00:23:29.734 "recv_buf_size": 2097152, 00:23:29.734 "send_buf_size": 2097152, 00:23:29.734 "enable_recv_pipe": true, 00:23:29.734 "enable_quickack": false, 00:23:29.734 "enable_placement_id": 0, 00:23:29.734 "enable_zerocopy_send_server": true, 00:23:29.734 "enable_zerocopy_send_client": false, 00:23:29.734 "zerocopy_threshold": 0, 00:23:29.734 "tls_version": 0, 00:23:29.734 "enable_ktls": false 00:23:29.734 } 00:23:29.734 } 00:23:29.734 ] 00:23:29.734 }, 00:23:29.734 { 00:23:29.734 "subsystem": "vmd", 00:23:29.734 "config": [] 00:23:29.734 }, 00:23:29.734 { 00:23:29.734 "subsystem": "accel", 00:23:29.734 "config": [ 00:23:29.734 { 00:23:29.734 "method": "accel_set_options", 00:23:29.734 "params": { 00:23:29.734 "small_cache_size": 128, 00:23:29.734 "large_cache_size": 16, 00:23:29.734 "task_count": 2048, 00:23:29.734 "sequence_count": 2048, 00:23:29.734 "buf_count": 2048 00:23:29.734 } 00:23:29.734 } 00:23:29.734 ] 00:23:29.734 }, 00:23:29.734 { 00:23:29.734 "subsystem": "bdev", 00:23:29.734 "config": [ 00:23:29.734 { 00:23:29.734 "method": "bdev_set_options", 00:23:29.734 "params": { 00:23:29.734 "bdev_io_pool_size": 65535, 00:23:29.734 "bdev_io_cache_size": 256, 00:23:29.734 "bdev_auto_examine": true, 00:23:29.734 "iobuf_small_cache_size": 128, 00:23:29.734 "iobuf_large_cache_size": 16 
00:23:29.734 } 00:23:29.734 }, 00:23:29.734 { 00:23:29.734 "method": "bdev_raid_set_options", 00:23:29.734 "params": { 00:23:29.734 "process_window_size_kb": 1024, 00:23:29.735 "process_max_bandwidth_mb_sec": 0 00:23:29.735 } 00:23:29.735 }, 00:23:29.735 { 00:23:29.735 "method": "bdev_iscsi_set_options", 00:23:29.735 "params": { 00:23:29.735 "timeout_sec": 30 00:23:29.735 } 00:23:29.735 }, 00:23:29.735 { 00:23:29.735 "method": "bdev_nvme_set_options", 00:23:29.735 "params": { 00:23:29.735 "action_on_timeout": "none", 00:23:29.735 "timeout_us": 0, 00:23:29.735 "timeout_admin_us": 0, 00:23:29.735 "keep_alive_timeout_ms": 10000, 00:23:29.735 "arbitration_burst": 0, 00:23:29.735 "low_priority_weight": 0, 00:23:29.735 "medium_priority_weight": 0, 00:23:29.735 "high_priority_weight": 0, 00:23:29.735 "nvme_adminq_poll_period_us": 10000, 00:23:29.735 "nvme_ioq_poll_period_us": 0, 00:23:29.735 "io_queue_requests": 0, 00:23:29.735 "delay_cmd_submit": true, 00:23:29.735 "transport_retry_count": 4, 00:23:29.735 "bdev_retry_count": 3, 00:23:29.735 "transport_ack_timeout": 0, 00:23:29.735 "ctrlr_loss_timeout_sec": 0, 00:23:29.735 "reconnect_delay_sec": 0, 00:23:29.735 "fast_io_fail_timeout_sec": 0, 00:23:29.735 "disable_auto_failback": false, 00:23:29.735 "generate_uuids": false, 00:23:29.735 "transport_tos": 0, 00:23:29.735 "nvme_error_stat": false, 00:23:29.735 "rdma_srq_size": 0, 00:23:29.735 "io_path_stat": false, 00:23:29.735 "allow_accel_sequence": false, 00:23:29.735 "rdma_max_cq_size": 0, 00:23:29.735 "rdma_cm_event_timeout_ms": 0, 00:23:29.735 "dhchap_digests": [ 00:23:29.735 "sha256", 00:23:29.735 "sha384", 00:23:29.735 "sha512" 00:23:29.735 ], 00:23:29.735 "dhchap_dhgroups": [ 00:23:29.735 "null", 00:23:29.735 "ffdhe2048", 00:23:29.735 "ffdhe3072", 00:23:29.735 "ffdhe4096", 00:23:29.735 "ffdhe6144", 00:23:29.735 "ffdhe8192" 00:23:29.735 ] 00:23:29.735 } 00:23:29.735 }, 00:23:29.735 { 00:23:29.735 "method": "bdev_nvme_set_hotplug", 00:23:29.735 "params": { 00:23:29.735 
"period_us": 100000, 00:23:29.735 "enable": false 00:23:29.735 } 00:23:29.735 }, 00:23:29.735 { 00:23:29.735 "method": "bdev_malloc_create", 00:23:29.735 "params": { 00:23:29.735 "name": "malloc0", 00:23:29.735 "num_blocks": 8192, 00:23:29.735 "block_size": 4096, 00:23:29.735 "physical_block_size": 4096, 00:23:29.735 "uuid": "3c9e695c-eecc-40a4-8ca3-4773bcafaca1", 00:23:29.735 "optimal_io_boundary": 0, 00:23:29.735 "md_size": 0, 00:23:29.735 "dif_type": 0, 00:23:29.735 "dif_is_head_of_md": false, 00:23:29.735 "dif_pi_format": 0 00:23:29.735 } 00:23:29.735 }, 00:23:29.735 { 00:23:29.735 "method": "bdev_wait_for_examine" 00:23:29.735 } 00:23:29.735 ] 00:23:29.735 }, 00:23:29.735 { 00:23:29.735 "subsystem": "nbd", 00:23:29.735 "config": [] 00:23:29.735 }, 00:23:29.735 { 00:23:29.735 "subsystem": "scheduler", 00:23:29.735 "config": [ 00:23:29.735 { 00:23:29.735 "method": "framework_set_scheduler", 00:23:29.735 "params": { 00:23:29.735 "name": "static" 00:23:29.735 } 00:23:29.735 } 00:23:29.735 ] 00:23:29.735 }, 00:23:29.735 { 00:23:29.735 "subsystem": "nvmf", 00:23:29.735 "config": [ 00:23:29.735 { 00:23:29.735 "method": "nvmf_set_config", 00:23:29.735 "params": { 00:23:29.735 "discovery_filter": "match_any", 00:23:29.735 "admin_cmd_passthru": { 00:23:29.735 "identify_ctrlr": false 00:23:29.735 } 00:23:29.735 } 00:23:29.735 }, 00:23:29.735 { 00:23:29.735 "method": "nvmf_set_max_subsystems", 00:23:29.735 "params": { 00:23:29.735 "max_subsystems": 1024 00:23:29.735 } 00:23:29.735 }, 00:23:29.735 { 00:23:29.735 "method": "nvmf_set_crdt", 00:23:29.735 "params": { 00:23:29.735 "crdt1": 0, 00:23:29.735 "crdt2": 0, 00:23:29.735 "crdt3": 0 00:23:29.735 } 00:23:29.735 }, 00:23:29.735 { 00:23:29.735 "method": "nvmf_create_transport", 00:23:29.735 "params": { 00:23:29.735 "trtype": "TCP", 00:23:29.735 "max_queue_depth": 128, 00:23:29.735 "max_io_qpairs_per_ctrlr": 127, 00:23:29.735 "in_capsule_data_size": 4096, 00:23:29.735 "max_io_size": 131072, 00:23:29.735 "io_unit_size": 
131072, 00:23:29.735 "max_aq_depth": 128, 00:23:29.735 "num_shared_buffers": 511, 00:23:29.735 "buf_cache_size": 4294967295, 00:23:29.735 "dif_insert_or_strip": false, 00:23:29.735 "zcopy": false, 00:23:29.735 "c2h_success": false, 00:23:29.735 "sock_priority": 0, 00:23:29.735 "abort_timeout_sec": 1, 00:23:29.735 "ack_timeout": 0, 00:23:29.735 "data_wr_pool_size": 0 00:23:29.735 } 00:23:29.735 }, 00:23:29.735 { 00:23:29.735 "method": "nvmf_create_subsystem", 00:23:29.735 "params": { 00:23:29.735 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.735 "allow_any_host": false, 00:23:29.735 "serial_number": "00000000000000000000", 00:23:29.735 "model_number": "SPDK bdev Controller", 00:23:29.735 "max_namespaces": 32, 00:23:29.735 "min_cntlid": 1, 00:23:29.735 "max_cntlid": 65519, 00:23:29.735 "ana_reporting": false 00:23:29.735 } 00:23:29.735 }, 00:23:29.735 { 00:23:29.735 "method": "nvmf_subsystem_add_host", 00:23:29.735 "params": { 00:23:29.735 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.735 "host": "nqn.2016-06.io.spdk:host1", 00:23:29.735 "psk": "key0" 00:23:29.735 } 00:23:29.735 }, 00:23:29.735 { 00:23:29.735 "method": "nvmf_subsystem_add_ns", 00:23:29.735 "params": { 00:23:29.735 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.735 "namespace": { 00:23:29.735 "nsid": 1, 00:23:29.735 "bdev_name": "malloc0", 00:23:29.735 "nguid": "3C9E695CEECC40A48CA34773BCAFACA1", 00:23:29.735 "uuid": "3c9e695c-eecc-40a4-8ca3-4773bcafaca1", 00:23:29.735 "no_auto_visible": false 00:23:29.735 } 00:23:29.735 } 00:23:29.735 }, 00:23:29.735 { 00:23:29.735 "method": "nvmf_subsystem_add_listener", 00:23:29.735 "params": { 00:23:29.735 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.735 "listen_address": { 00:23:29.735 "trtype": "TCP", 00:23:29.735 "adrfam": "IPv4", 00:23:29.735 "traddr": "10.0.0.2", 00:23:29.735 "trsvcid": "4420" 00:23:29.735 }, 00:23:29.735 "secure_channel": false, 00:23:29.735 "sock_impl": "ssl" 00:23:29.735 } 00:23:29.735 } 00:23:29.735 ] 00:23:29.735 } 00:23:29.735 ] 
00:23:29.735 }' 00:23:29.735 01:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:29.994 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:23:29.994 "subsystems": [ 00:23:29.994 { 00:23:29.994 "subsystem": "keyring", 00:23:29.994 "config": [ 00:23:29.994 { 00:23:29.994 "method": "keyring_file_add_key", 00:23:29.994 "params": { 00:23:29.994 "name": "key0", 00:23:29.994 "path": "/tmp/tmp.lVO6vyF9ks" 00:23:29.994 } 00:23:29.994 } 00:23:29.994 ] 00:23:29.994 }, 00:23:29.994 { 00:23:29.994 "subsystem": "iobuf", 00:23:29.994 "config": [ 00:23:29.994 { 00:23:29.994 "method": "iobuf_set_options", 00:23:29.994 "params": { 00:23:29.994 "small_pool_count": 8192, 00:23:29.994 "large_pool_count": 1024, 00:23:29.994 "small_bufsize": 8192, 00:23:29.994 "large_bufsize": 135168 00:23:29.994 } 00:23:29.994 } 00:23:29.994 ] 00:23:29.994 }, 00:23:29.994 { 00:23:29.994 "subsystem": "sock", 00:23:29.994 "config": [ 00:23:29.994 { 00:23:29.994 "method": "sock_set_default_impl", 00:23:29.994 "params": { 00:23:29.994 "impl_name": "posix" 00:23:29.994 } 00:23:29.994 }, 00:23:29.994 { 00:23:29.994 "method": "sock_impl_set_options", 00:23:29.994 "params": { 00:23:29.994 "impl_name": "ssl", 00:23:29.994 "recv_buf_size": 4096, 00:23:29.994 "send_buf_size": 4096, 00:23:29.994 "enable_recv_pipe": true, 00:23:29.994 "enable_quickack": false, 00:23:29.994 "enable_placement_id": 0, 00:23:29.994 "enable_zerocopy_send_server": true, 00:23:29.994 "enable_zerocopy_send_client": false, 00:23:29.994 "zerocopy_threshold": 0, 00:23:29.994 "tls_version": 0, 00:23:29.994 "enable_ktls": false 00:23:29.994 } 00:23:29.994 }, 00:23:29.994 { 00:23:29.994 "method": "sock_impl_set_options", 00:23:29.994 "params": { 00:23:29.994 "impl_name": "posix", 00:23:29.994 "recv_buf_size": 2097152, 00:23:29.994 "send_buf_size": 2097152, 00:23:29.994 
"enable_recv_pipe": true, 00:23:29.994 "enable_quickack": false, 00:23:29.994 "enable_placement_id": 0, 00:23:29.994 "enable_zerocopy_send_server": true, 00:23:29.994 "enable_zerocopy_send_client": false, 00:23:29.994 "zerocopy_threshold": 0, 00:23:29.994 "tls_version": 0, 00:23:29.994 "enable_ktls": false 00:23:29.994 } 00:23:29.994 } 00:23:29.994 ] 00:23:29.994 }, 00:23:29.994 { 00:23:29.994 "subsystem": "vmd", 00:23:29.994 "config": [] 00:23:29.994 }, 00:23:29.994 { 00:23:29.994 "subsystem": "accel", 00:23:29.994 "config": [ 00:23:29.994 { 00:23:29.994 "method": "accel_set_options", 00:23:29.994 "params": { 00:23:29.994 "small_cache_size": 128, 00:23:29.994 "large_cache_size": 16, 00:23:29.994 "task_count": 2048, 00:23:29.994 "sequence_count": 2048, 00:23:29.994 "buf_count": 2048 00:23:29.994 } 00:23:29.994 } 00:23:29.994 ] 00:23:29.994 }, 00:23:29.994 { 00:23:29.994 "subsystem": "bdev", 00:23:29.994 "config": [ 00:23:29.994 { 00:23:29.994 "method": "bdev_set_options", 00:23:29.994 "params": { 00:23:29.994 "bdev_io_pool_size": 65535, 00:23:29.994 "bdev_io_cache_size": 256, 00:23:29.994 "bdev_auto_examine": true, 00:23:29.994 "iobuf_small_cache_size": 128, 00:23:29.994 "iobuf_large_cache_size": 16 00:23:29.994 } 00:23:29.994 }, 00:23:29.994 { 00:23:29.994 "method": "bdev_raid_set_options", 00:23:29.994 "params": { 00:23:29.994 "process_window_size_kb": 1024, 00:23:29.994 "process_max_bandwidth_mb_sec": 0 00:23:29.994 } 00:23:29.994 }, 00:23:29.994 { 00:23:29.994 "method": "bdev_iscsi_set_options", 00:23:29.994 "params": { 00:23:29.994 "timeout_sec": 30 00:23:29.994 } 00:23:29.994 }, 00:23:29.994 { 00:23:29.994 "method": "bdev_nvme_set_options", 00:23:29.994 "params": { 00:23:29.994 "action_on_timeout": "none", 00:23:29.994 "timeout_us": 0, 00:23:29.994 "timeout_admin_us": 0, 00:23:29.994 "keep_alive_timeout_ms": 10000, 00:23:29.994 "arbitration_burst": 0, 00:23:29.994 "low_priority_weight": 0, 00:23:29.994 "medium_priority_weight": 0, 00:23:29.994 
"high_priority_weight": 0, 00:23:29.994 "nvme_adminq_poll_period_us": 10000, 00:23:29.994 "nvme_ioq_poll_period_us": 0, 00:23:29.994 "io_queue_requests": 512, 00:23:29.994 "delay_cmd_submit": true, 00:23:29.994 "transport_retry_count": 4, 00:23:29.994 "bdev_retry_count": 3, 00:23:29.994 "transport_ack_timeout": 0, 00:23:29.994 "ctrlr_loss_timeout_sec": 0, 00:23:29.994 "reconnect_delay_sec": 0, 00:23:29.994 "fast_io_fail_timeout_sec": 0, 00:23:29.994 "disable_auto_failback": false, 00:23:29.994 "generate_uuids": false, 00:23:29.994 "transport_tos": 0, 00:23:29.994 "nvme_error_stat": false, 00:23:29.994 "rdma_srq_size": 0, 00:23:29.994 "io_path_stat": false, 00:23:29.994 "allow_accel_sequence": false, 00:23:29.994 "rdma_max_cq_size": 0, 00:23:29.994 "rdma_cm_event_timeout_ms": 0, 00:23:29.994 "dhchap_digests": [ 00:23:29.994 "sha256", 00:23:29.994 "sha384", 00:23:29.994 "sha512" 00:23:29.994 ], 00:23:29.994 "dhchap_dhgroups": [ 00:23:29.994 "null", 00:23:29.994 "ffdhe2048", 00:23:29.994 "ffdhe3072", 00:23:29.994 "ffdhe4096", 00:23:29.994 "ffdhe6144", 00:23:29.994 "ffdhe8192" 00:23:29.994 ] 00:23:29.994 } 00:23:29.994 }, 00:23:29.994 { 00:23:29.994 "method": "bdev_nvme_attach_controller", 00:23:29.994 "params": { 00:23:29.994 "name": "nvme0", 00:23:29.994 "trtype": "TCP", 00:23:29.994 "adrfam": "IPv4", 00:23:29.994 "traddr": "10.0.0.2", 00:23:29.994 "trsvcid": "4420", 00:23:29.994 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.994 "prchk_reftag": false, 00:23:29.994 "prchk_guard": false, 00:23:29.994 "ctrlr_loss_timeout_sec": 0, 00:23:29.994 "reconnect_delay_sec": 0, 00:23:29.994 "fast_io_fail_timeout_sec": 0, 00:23:29.995 "psk": "key0", 00:23:29.995 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:29.995 "hdgst": false, 00:23:29.995 "ddgst": false 00:23:29.995 } 00:23:29.995 }, 00:23:29.995 { 00:23:29.995 "method": "bdev_nvme_set_hotplug", 00:23:29.995 "params": { 00:23:29.995 "period_us": 100000, 00:23:29.995 "enable": false 00:23:29.995 } 00:23:29.995 }, 
00:23:29.995 { 00:23:29.995 "method": "bdev_enable_histogram", 00:23:29.995 "params": { 00:23:29.995 "name": "nvme0n1", 00:23:29.995 "enable": true 00:23:29.995 } 00:23:29.995 }, 00:23:29.995 { 00:23:29.995 "method": "bdev_wait_for_examine" 00:23:29.995 } 00:23:29.995 ] 00:23:29.995 }, 00:23:29.995 { 00:23:29.995 "subsystem": "nbd", 00:23:29.995 "config": [] 00:23:29.995 } 00:23:29.995 ] 00:23:29.995 }' 00:23:29.995 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 1870138 00:23:29.995 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1870138 ']' 00:23:29.995 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1870138 00:23:29.995 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:29.995 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:29.995 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1870138 00:23:29.995 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:29.995 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:29.995 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1870138' 00:23:29.995 killing process with pid 1870138 00:23:29.995 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1870138 00:23:29.995 Received shutdown signal, test time was about 1.000000 seconds 00:23:29.995 00:23:29.995 Latency(us) 00:23:29.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.995 =================================================================================================================== 00:23:29.995 Total : 0.00 0.00 0.00 0.00 0.00 0.00 
0.00 00:23:29.995 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1870138 00:23:30.254 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 1870089 00:23:30.254 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1870089 ']' 00:23:30.254 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1870089 00:23:30.254 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:30.254 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:30.254 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1870089 00:23:30.254 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:30.254 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:30.254 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1870089' 00:23:30.254 killing process with pid 1870089 00:23:30.254 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1870089 00:23:30.254 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1870089 00:23:30.513 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:23:30.513 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:30.513 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:23:30.513 "subsystems": [ 00:23:30.513 { 00:23:30.513 "subsystem": "keyring", 00:23:30.513 "config": [ 00:23:30.513 { 00:23:30.513 "method": "keyring_file_add_key", 00:23:30.513 "params": { 00:23:30.513 "name": "key0", 00:23:30.513 "path": 
"/tmp/tmp.lVO6vyF9ks" 00:23:30.513 } 00:23:30.513 } 00:23:30.513 ] 00:23:30.513 }, 00:23:30.513 { 00:23:30.513 "subsystem": "iobuf", 00:23:30.513 "config": [ 00:23:30.513 { 00:23:30.513 "method": "iobuf_set_options", 00:23:30.513 "params": { 00:23:30.513 "small_pool_count": 8192, 00:23:30.513 "large_pool_count": 1024, 00:23:30.513 "small_bufsize": 8192, 00:23:30.513 "large_bufsize": 135168 00:23:30.513 } 00:23:30.513 } 00:23:30.513 ] 00:23:30.513 }, 00:23:30.513 { 00:23:30.513 "subsystem": "sock", 00:23:30.513 "config": [ 00:23:30.513 { 00:23:30.513 "method": "sock_set_default_impl", 00:23:30.513 "params": { 00:23:30.513 "impl_name": "posix" 00:23:30.513 } 00:23:30.513 }, 00:23:30.513 { 00:23:30.513 "method": "sock_impl_set_options", 00:23:30.513 "params": { 00:23:30.513 "impl_name": "ssl", 00:23:30.513 "recv_buf_size": 4096, 00:23:30.513 "send_buf_size": 4096, 00:23:30.513 "enable_recv_pipe": true, 00:23:30.513 "enable_quickack": false, 00:23:30.513 "enable_placement_id": 0, 00:23:30.513 "enable_zerocopy_send_server": true, 00:23:30.513 "enable_zerocopy_send_client": false, 00:23:30.513 "zerocopy_threshold": 0, 00:23:30.513 "tls_version": 0, 00:23:30.513 "enable_ktls": false 00:23:30.513 } 00:23:30.513 }, 00:23:30.513 { 00:23:30.513 "method": "sock_impl_set_options", 00:23:30.513 "params": { 00:23:30.513 "impl_name": "posix", 00:23:30.513 "recv_buf_size": 2097152, 00:23:30.513 "send_buf_size": 2097152, 00:23:30.513 "enable_recv_pipe": true, 00:23:30.513 "enable_quickack": false, 00:23:30.513 "enable_placement_id": 0, 00:23:30.513 "enable_zerocopy_send_server": true, 00:23:30.513 "enable_zerocopy_send_client": false, 00:23:30.513 "zerocopy_threshold": 0, 00:23:30.513 "tls_version": 0, 00:23:30.513 "enable_ktls": false 00:23:30.513 } 00:23:30.513 } 00:23:30.513 ] 00:23:30.513 }, 00:23:30.513 { 00:23:30.513 "subsystem": "vmd", 00:23:30.513 "config": [] 00:23:30.513 }, 00:23:30.513 { 00:23:30.513 "subsystem": "accel", 00:23:30.513 "config": [ 00:23:30.513 { 
00:23:30.513 "method": "accel_set_options", 00:23:30.513 "params": { 00:23:30.513 "small_cache_size": 128, 00:23:30.513 "large_cache_size": 16, 00:23:30.513 "task_count": 2048, 00:23:30.513 "sequence_count": 2048, 00:23:30.513 "buf_count": 2048 00:23:30.513 } 00:23:30.513 } 00:23:30.513 ] 00:23:30.513 }, 00:23:30.513 { 00:23:30.513 "subsystem": "bdev", 00:23:30.513 "config": [ 00:23:30.513 { 00:23:30.513 "method": "bdev_set_options", 00:23:30.513 "params": { 00:23:30.513 "bdev_io_pool_size": 65535, 00:23:30.513 "bdev_io_cache_size": 256, 00:23:30.513 "bdev_auto_examine": true, 00:23:30.513 "iobuf_small_cache_size": 128, 00:23:30.513 "iobuf_large_cache_size": 16 00:23:30.513 } 00:23:30.513 }, 00:23:30.513 { 00:23:30.513 "method": "bdev_raid_set_options", 00:23:30.513 "params": { 00:23:30.513 "process_window_size_kb": 1024, 00:23:30.513 "process_max_bandwidth_mb_sec": 0 00:23:30.513 } 00:23:30.513 }, 00:23:30.513 { 00:23:30.513 "method": "bdev_iscsi_set_options", 00:23:30.513 "params": { 00:23:30.513 "timeout_sec": 30 00:23:30.513 } 00:23:30.513 }, 00:23:30.513 { 00:23:30.513 "method": "bdev_nvme_set_options", 00:23:30.513 "params": { 00:23:30.513 "action_on_timeout": "none", 00:23:30.513 "timeout_us": 0, 00:23:30.513 "timeout_admin_us": 0, 00:23:30.513 "keep_alive_timeout_ms": 10000, 00:23:30.513 "arbitration_burst": 0, 00:23:30.513 "low_priority_weight": 0, 00:23:30.513 "medium_priority_weight": 0, 00:23:30.513 "high_priority_weight": 0, 00:23:30.513 "nvme_adminq_poll_period_us": 10000, 00:23:30.513 "nvme_ioq_poll_period_us": 0, 00:23:30.513 "io_queue_requests": 0, 00:23:30.513 "delay_cmd_submit": true, 00:23:30.513 "transport_retry_count": 4, 00:23:30.513 "bdev_retry_count": 3, 00:23:30.513 "transport_ack_timeout": 0, 00:23:30.513 "ctrlr_loss_timeout_sec": 0, 00:23:30.513 "reconnect_delay_sec": 0, 00:23:30.513 "fast_io_fail_timeout_sec": 0, 00:23:30.513 "disable_auto_failback": false, 00:23:30.513 "generate_uuids": false, 00:23:30.513 "transport_tos": 0, 
00:23:30.513 "nvme_error_stat": false, 00:23:30.513 "rdma_srq_size": 0, 00:23:30.513 "io_path_stat": false, 00:23:30.513 "allow_accel_sequence": false, 00:23:30.513 "rdma_max_cq_size": 0, 00:23:30.513 "rdma_cm_event_timeout_ms": 0, 00:23:30.513 "dhchap_digests": [ 00:23:30.513 "sha256", 00:23:30.513 "sha384", 00:23:30.513 "sha512" 00:23:30.513 ], 00:23:30.513 "dhchap_dhgroups": [ 00:23:30.513 "null", 00:23:30.513 "ffdhe2048", 00:23:30.513 "ffdhe3072", 00:23:30.513 "ffdhe4096", 00:23:30.513 "ffdhe6144", 00:23:30.513 "ffdhe8192" 00:23:30.513 ] 00:23:30.513 } 00:23:30.513 }, 00:23:30.513 { 00:23:30.513 "method": "bdev_nvme_set_hotplug", 00:23:30.513 "params": { 00:23:30.513 "period_us": 100000, 00:23:30.513 "enable": false 00:23:30.513 } 00:23:30.513 }, 00:23:30.513 { 00:23:30.513 "method": "bdev_malloc_create", 00:23:30.513 "params": { 00:23:30.513 "name": "malloc0", 00:23:30.513 "num_blocks": 8192, 00:23:30.513 "block_size": 4096, 00:23:30.513 "physical_block_size": 4096, 00:23:30.513 "uuid": "3c9e695c-eecc-40a4-8ca3-4773bcafaca1", 00:23:30.513 "optimal_io_boundary": 0, 00:23:30.513 "md_size": 0, 00:23:30.513 "dif_type": 0, 00:23:30.513 "dif_is_head_of_md": false, 00:23:30.513 "dif_pi_format": 0 00:23:30.513 } 00:23:30.513 }, 00:23:30.513 { 00:23:30.513 "method": "bdev_wait_for_examine" 00:23:30.513 } 00:23:30.513 ] 00:23:30.513 }, 00:23:30.513 { 00:23:30.513 "subsystem": "nbd", 00:23:30.513 "config": [] 00:23:30.513 }, 00:23:30.513 { 00:23:30.514 "subsystem": "scheduler", 00:23:30.514 "config": [ 00:23:30.514 { 00:23:30.514 "method": "framework_set_scheduler", 00:23:30.514 "params": { 00:23:30.514 "name": "static" 00:23:30.514 } 00:23:30.514 } 00:23:30.514 ] 00:23:30.514 }, 00:23:30.514 { 00:23:30.514 "subsystem": "nvmf", 00:23:30.514 "config": [ 00:23:30.514 { 00:23:30.514 "method": "nvmf_set_config", 00:23:30.514 "params": { 00:23:30.514 "discovery_filter": "match_any", 00:23:30.514 "admin_cmd_passthru": { 00:23:30.514 "identify_ctrlr": false 00:23:30.514 } 
00:23:30.514 } 00:23:30.514 }, 00:23:30.514 { 00:23:30.514 "method": "nvmf_set_max_subsystems", 00:23:30.514 "params": { 00:23:30.514 "max_subsystems": 1024 00:23:30.514 } 00:23:30.514 }, 00:23:30.514 { 00:23:30.514 "method": "nvmf_set_crdt", 00:23:30.514 "params": { 00:23:30.514 "crdt1": 0, 00:23:30.514 "crdt2": 0, 00:23:30.514 "crdt3": 0 00:23:30.514 } 00:23:30.514 }, 00:23:30.514 { 00:23:30.514 "method": "nvmf_create_transport", 00:23:30.514 "params": { 00:23:30.514 "trtype": "TCP", 00:23:30.514 "max_queue_depth": 128, 00:23:30.514 "max_io_qpairs_per_ctrlr": 127, 00:23:30.514 "in_capsule_data_size": 4096, 00:23:30.514 "max_io_size": 131072, 00:23:30.514 "io_unit_size": 131072, 00:23:30.514 "max_aq_depth": 128, 00:23:30.514 "num_shared_buffers": 511, 00:23:30.514 "buf_cache_size": 4294967295, 00:23:30.514 "dif_insert_or_strip": false, 00:23:30.514 "zcopy": false, 00:23:30.514 "c2h_success": false, 00:23:30.514 "sock_priority": 0, 00:23:30.514 "abort_timeout_sec": 1, 00:23:30.514 "ack_timeout": 0, 00:23:30.514 "data_wr_pool_size": 0 00:23:30.514 } 00:23:30.514 }, 00:23:30.514 { 00:23:30.514 "method": "nvmf_create_subsystem", 00:23:30.514 "params": { 00:23:30.514 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.514 "allow_any_host": false, 00:23:30.514 "serial_number": "00000000000000000000", 00:23:30.514 "model_number": "SPDK bdev Controller", 00:23:30.514 "max_namespaces": 32, 00:23:30.514 "min_cntlid": 1, 00:23:30.514 "max_cntlid": 65519, 00:23:30.514 "ana_reporting": false 00:23:30.514 } 00:23:30.514 }, 00:23:30.514 { 00:23:30.514 "method": "nvmf_subsystem_add_host", 00:23:30.514 "params": { 00:23:30.514 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.514 "host": "nqn.2016-06.io.spdk:host1", 00:23:30.514 "psk": "key0" 00:23:30.514 } 00:23:30.514 }, 00:23:30.514 { 00:23:30.514 "method": "nvmf_subsystem_add_ns", 00:23:30.514 "params": { 00:23:30.514 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.514 "namespace": { 00:23:30.514 "nsid": 1, 00:23:30.514 "bdev_name": 
"malloc0", 00:23:30.514 "nguid": "3C9E695CEECC40A48CA34773BCAFACA1", 00:23:30.514 "uuid": "3c9e695c-eecc-40a4-8ca3-4773bcafaca1", 00:23:30.514 "no_auto_visible": false 00:23:30.514 } 00:23:30.514 } 00:23:30.514 }, 00:23:30.514 { 00:23:30.514 "method": "nvmf_subsystem_add_listener", 00:23:30.514 "params": { 00:23:30.514 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.514 "listen_address": { 00:23:30.514 "trtype": "TCP", 00:23:30.514 "adrfam": "IPv4", 00:23:30.514 "traddr": "10.0.0.2", 00:23:30.514 "trsvcid": "4420" 00:23:30.514 }, 00:23:30.514 "secure_channel": false, 00:23:30.514 "sock_impl": "ssl" 00:23:30.514 } 00:23:30.514 } 00:23:30.514 ] 00:23:30.514 } 00:23:30.514 ] 00:23:30.514 }' 00:23:30.514 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:30.514 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.514 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:30.514 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1870602 00:23:30.514 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1870602 00:23:30.514 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1870602 ']' 00:23:30.514 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.514 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:30.514 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
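The configuration echoed above (and the earlier `save_config` dumps) are ordinary JSON once the per-token log timestamps are stripped, so they can be inspected programmatically. A minimal sketch that lists which subsystems actually carry RPC calls; the embedded snippet is a heavily trimmed stand-in for the real dump, keeping only fields that appear in this log:

```python
import json

# Trimmed stand-in for a `save_config` dump as seen in this log.
cfg = json.loads("""
{
  "subsystems": [
    {"subsystem": "keyring",
     "config": [{"method": "keyring_file_add_key",
                 "params": {"name": "key0", "path": "/tmp/tmp.lVO6vyF9ks"}}]},
    {"subsystem": "vmd", "config": []},
    {"subsystem": "nbd", "config": []}
  ]
}
""")

# Report only subsystems with a non-empty config array.
configured = [s["subsystem"] for s in cfg["subsystems"] if s["config"]]
print(configured)  # ['keyring']
```

In the full dumps above, `vmd` and `nbd` carry empty config arrays exactly as sketched here, while `keyring`, `sock`, `bdev`, and `nvmf` carry the methods that reproduce the TLS test setup.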
00:23:30.514 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:30.514 01:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.514 [2024-07-26 01:06:00.864267] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:23:30.514 [2024-07-26 01:06:00.864365] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.514 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.514 [2024-07-26 01:06:00.934263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.773 [2024-07-26 01:06:01.028284] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.773 [2024-07-26 01:06:01.028355] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.773 [2024-07-26 01:06:01.028380] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.773 [2024-07-26 01:06:01.028392] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.773 [2024-07-26 01:06:01.028403] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:30.773 [2024-07-26 01:06:01.028473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.032 [2024-07-26 01:06:01.269478] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.032 [2024-07-26 01:06:01.313804] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:31.032 [2024-07-26 01:06:01.314073] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.598 01:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:31.599 01:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:31.599 01:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:31.599 01:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:31.599 01:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.599 01:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.599 01:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=1870811 00:23:31.599 01:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 1870811 /var/tmp/bdevperf.sock 00:23:31.599 01:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1870811 ']' 00:23:31.599 01:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:31.599 01:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:31.599 01:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c 
/dev/fd/63 00:23:31.599 01:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:31.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:31.599 01:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:31.599 01:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:23:31.599 "subsystems": [ 00:23:31.599 { 00:23:31.599 "subsystem": "keyring", 00:23:31.599 "config": [ 00:23:31.599 { 00:23:31.599 "method": "keyring_file_add_key", 00:23:31.599 "params": { 00:23:31.599 "name": "key0", 00:23:31.599 "path": "/tmp/tmp.lVO6vyF9ks" 00:23:31.599 } 00:23:31.599 } 00:23:31.599 ] 00:23:31.599 }, 00:23:31.599 { 00:23:31.599 "subsystem": "iobuf", 00:23:31.599 "config": [ 00:23:31.599 { 00:23:31.599 "method": "iobuf_set_options", 00:23:31.599 "params": { 00:23:31.599 "small_pool_count": 8192, 00:23:31.599 "large_pool_count": 1024, 00:23:31.599 "small_bufsize": 8192, 00:23:31.599 "large_bufsize": 135168 00:23:31.599 } 00:23:31.599 } 00:23:31.599 ] 00:23:31.599 }, 00:23:31.599 { 00:23:31.599 "subsystem": "sock", 00:23:31.599 "config": [ 00:23:31.599 { 00:23:31.599 "method": "sock_set_default_impl", 00:23:31.599 "params": { 00:23:31.599 "impl_name": "posix" 00:23:31.599 } 00:23:31.599 }, 00:23:31.599 { 00:23:31.599 "method": "sock_impl_set_options", 00:23:31.599 "params": { 00:23:31.599 "impl_name": "ssl", 00:23:31.599 "recv_buf_size": 4096, 00:23:31.599 "send_buf_size": 4096, 00:23:31.599 "enable_recv_pipe": true, 00:23:31.599 "enable_quickack": false, 00:23:31.599 "enable_placement_id": 0, 00:23:31.599 "enable_zerocopy_send_server": true, 00:23:31.599 "enable_zerocopy_send_client": false, 00:23:31.599 "zerocopy_threshold": 0, 00:23:31.599 "tls_version": 0, 00:23:31.599 "enable_ktls": false 00:23:31.599 } 00:23:31.599 }, 00:23:31.599 { 
00:23:31.599 "method": "sock_impl_set_options", 00:23:31.599 "params": { 00:23:31.599 "impl_name": "posix", 00:23:31.599 "recv_buf_size": 2097152, 00:23:31.599 "send_buf_size": 2097152, 00:23:31.599 "enable_recv_pipe": true, 00:23:31.599 "enable_quickack": false, 00:23:31.599 "enable_placement_id": 0, 00:23:31.599 "enable_zerocopy_send_server": true, 00:23:31.599 "enable_zerocopy_send_client": false, 00:23:31.599 "zerocopy_threshold": 0, 00:23:31.599 "tls_version": 0, 00:23:31.599 "enable_ktls": false 00:23:31.599 } 00:23:31.599 } 00:23:31.599 ] 00:23:31.599 }, 00:23:31.599 { 00:23:31.599 "subsystem": "vmd", 00:23:31.599 "config": [] 00:23:31.599 }, 00:23:31.599 { 00:23:31.599 "subsystem": "accel", 00:23:31.599 "config": [ 00:23:31.599 { 00:23:31.599 "method": "accel_set_options", 00:23:31.599 "params": { 00:23:31.599 "small_cache_size": 128, 00:23:31.599 "large_cache_size": 16, 00:23:31.599 "task_count": 2048, 00:23:31.599 "sequence_count": 2048, 00:23:31.599 "buf_count": 2048 00:23:31.599 } 00:23:31.599 } 00:23:31.599 ] 00:23:31.599 }, 00:23:31.599 { 00:23:31.599 "subsystem": "bdev", 00:23:31.599 "config": [ 00:23:31.599 { 00:23:31.599 "method": "bdev_set_options", 00:23:31.599 "params": { 00:23:31.599 "bdev_io_pool_size": 65535, 00:23:31.599 "bdev_io_cache_size": 256, 00:23:31.599 "bdev_auto_examine": true, 00:23:31.599 "iobuf_small_cache_size": 128, 00:23:31.599 "iobuf_large_cache_size": 16 00:23:31.599 } 00:23:31.599 }, 00:23:31.599 { 00:23:31.599 "method": "bdev_raid_set_options", 00:23:31.599 "params": { 00:23:31.599 "process_window_size_kb": 1024, 00:23:31.599 "process_max_bandwidth_mb_sec": 0 00:23:31.599 } 00:23:31.599 }, 00:23:31.599 { 00:23:31.599 "method": "bdev_iscsi_set_options", 00:23:31.599 "params": { 00:23:31.599 "timeout_sec": 30 00:23:31.599 } 00:23:31.599 }, 00:23:31.599 { 00:23:31.599 "method": "bdev_nvme_set_options", 00:23:31.599 "params": { 00:23:31.599 "action_on_timeout": "none", 00:23:31.599 "timeout_us": 0, 00:23:31.599 
"timeout_admin_us": 0, 00:23:31.599 "keep_alive_timeout_ms": 10000, 00:23:31.599 "arbitration_burst": 0, 00:23:31.599 "low_priority_weight": 0, 00:23:31.599 "medium_priority_weight": 0, 00:23:31.599 "high_priority_weight": 0, 00:23:31.599 "nvme_adminq_poll_period_us": 10000, 00:23:31.599 "nvme_ioq_poll_period_us": 0, 00:23:31.599 "io_queue_requests": 512, 00:23:31.599 "delay_cmd_submit": true, 00:23:31.599 "transport_retry_count": 4, 00:23:31.599 "bdev_retry_count": 3, 00:23:31.599 "transport_ack_timeout": 0, 00:23:31.599 "ctrlr_loss_timeout_sec": 0, 00:23:31.599 "reconnect_delay_sec": 0, 00:23:31.599 "fast_io_fail_timeout_sec": 0, 00:23:31.599 "disable_auto_failback": false, 00:23:31.599 "generate_uuids": false, 00:23:31.599 "transport_tos": 0, 00:23:31.599 "nvme_error_stat": false, 00:23:31.599 "rdma_srq_size": 0, 00:23:31.599 "io_path_stat": false, 00:23:31.599 "allow_accel_sequence": false, 00:23:31.599 "rdma_max_cq_size": 0, 00:23:31.599 "rdma_cm_event_timeout_ms": 0, 00:23:31.599 "dhchap_digests": [ 00:23:31.599 "sha256", 00:23:31.599 "sha384", 00:23:31.599 "sha512" 00:23:31.599 ], 00:23:31.599 "dhchap_dhgroups": [ 00:23:31.599 "null", 00:23:31.599 "ffdhe2048", 00:23:31.599 "ffdhe3072", 00:23:31.599 "ffdhe4096", 00:23:31.599 "ffdhe6144", 00:23:31.599 "ffdhe8192" 00:23:31.599 ] 00:23:31.599 } 00:23:31.599 }, 00:23:31.599 { 00:23:31.599 "method": "bdev_nvme_attach_controller", 00:23:31.599 "params": { 00:23:31.599 "name": "nvme0", 00:23:31.599 "trtype": "TCP", 00:23:31.599 "adrfam": "IPv4", 00:23:31.599 "traddr": "10.0.0.2", 00:23:31.599 "trsvcid": "4420", 00:23:31.599 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.600 "prchk_reftag": false, 00:23:31.600 "prchk_guard": false, 00:23:31.600 "ctrlr_loss_timeout_sec": 0, 00:23:31.600 "reconnect_delay_sec": 0, 00:23:31.600 "fast_io_fail_timeout_sec": 0, 00:23:31.600 "psk": "key0", 00:23:31.600 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:31.600 "hdgst": false, 00:23:31.600 "ddgst": false 00:23:31.600 } 
00:23:31.600 }, 00:23:31.600 { 00:23:31.600 "method": "bdev_nvme_set_hotplug", 00:23:31.600 "params": { 00:23:31.600 "period_us": 100000, 00:23:31.600 "enable": false 00:23:31.600 } 00:23:31.600 }, 00:23:31.600 { 00:23:31.600 "method": "bdev_enable_histogram", 00:23:31.600 "params": { 00:23:31.600 "name": "nvme0n1", 00:23:31.600 "enable": true 00:23:31.600 } 00:23:31.600 }, 00:23:31.600 { 00:23:31.600 "method": "bdev_wait_for_examine" 00:23:31.600 } 00:23:31.600 ] 00:23:31.600 }, 00:23:31.600 { 00:23:31.600 "subsystem": "nbd", 00:23:31.600 "config": [] 00:23:31.600 } 00:23:31.600 ] 00:23:31.600 }' 00:23:31.600 01:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.600 [2024-07-26 01:06:01.904612] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:23:31.600 [2024-07-26 01:06:01.904701] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1870811 ] 00:23:31.600 EAL: No free 2048 kB hugepages reported on node 1 00:23:31.600 [2024-07-26 01:06:01.963739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.859 [2024-07-26 01:06:02.049903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.859 [2024-07-26 01:06:02.228409] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:32.793 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:32.793 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:32.793 01:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:32.793 01:06:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:23:32.793 01:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.793 01:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:33.051 Running I/O for 1 seconds... 00:23:33.981 00:23:33.981 Latency(us) 00:23:33.981 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.981 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:33.981 Verification LBA range: start 0x0 length 0x2000 00:23:33.981 nvme0n1 : 1.02 3433.40 13.41 0.00 0.00 36887.75 6602.15 39030.33 00:23:33.981 =================================================================================================================== 00:23:33.981 Total : 3433.40 13.41 0.00 0.00 36887.75 6602.15 39030.33 00:23:33.981 0 00:23:33.981 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:23:33.981 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:23:33.981 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:33.981 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:23:33.981 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:23:33.981 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:23:33.981 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:33.981 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:23:33.981 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z 
nvmf_trace.0 ]] 00:23:33.981 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:23:33.982 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:33.982 nvmf_trace.0 00:23:33.982 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:23:33.982 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1870811 00:23:33.982 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1870811 ']' 00:23:33.982 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1870811 00:23:33.982 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:33.982 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:33.982 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1870811 00:23:34.240 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:34.240 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:34.240 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1870811' 00:23:34.240 killing process with pid 1870811 00:23:34.240 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1870811 00:23:34.240 Received shutdown signal, test time was about 1.000000 seconds 00:23:34.240 00:23:34.240 Latency(us) 00:23:34.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.240 
=================================================================================================================== 00:23:34.240 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:34.240 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1870811 00:23:34.240 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:34.240 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:34.240 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:34.240 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:34.240 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:34.240 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:34.240 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:34.240 rmmod nvme_tcp 00:23:34.499 rmmod nvme_fabrics 00:23:34.499 rmmod nvme_keyring 00:23:34.499 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:34.499 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:34.499 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:34.499 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1870602 ']' 00:23:34.499 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1870602 00:23:34.499 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1870602 ']' 00:23:34.499 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1870602 00:23:34.499 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:34.499 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:23:34.499 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1870602 00:23:34.499 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:34.499 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:34.499 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1870602' 00:23:34.499 killing process with pid 1870602 00:23:34.499 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1870602 00:23:34.499 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1870602 00:23:34.756 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:34.756 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:34.756 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:34.756 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:34.756 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:34.756 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.756 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:34.756 01:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.656 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:36.656 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.EE0lMiz6hs /tmp/tmp.p5ehyOI4wd /tmp/tmp.lVO6vyF9ks 00:23:36.656 00:23:36.656 real 1m19.311s 
00:23:36.656 user 2m4.212s 00:23:36.656 sys 0m25.685s 00:23:36.656 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:36.656 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.656 ************************************ 00:23:36.656 END TEST nvmf_tls 00:23:36.656 ************************************ 00:23:36.656 01:06:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:36.656 01:06:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:36.656 01:06:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:36.656 01:06:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:36.656 ************************************ 00:23:36.656 START TEST nvmf_fips 00:23:36.656 ************************************ 00:23:36.656 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:36.914 * Looking for test storage... 
00:23:36.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # 
openssl version 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:36.914 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:23:36.915 Error setting digest 00:23:36.915 002286ADD27F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:36.915 002286ADD27F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:36.915 01:06:07 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.915 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.916 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:36.916 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:36.916 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:36.916 01:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # x722=() 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:38.814 01:06:09 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:38.814 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:38.814 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.814 01:06:09 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:38.814 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.814 
01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:38.814 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:38.814 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:39.073 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:39.073 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:39.073 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:39.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:39.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:23:39.073 00:23:39.073 --- 10.0.0.2 ping statistics --- 00:23:39.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.073 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:23:39.073 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:39.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:39.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:23:39.073 00:23:39.073 --- 10.0.0.1 ping statistics --- 00:23:39.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.073 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:23:39.073 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.073 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:39.073 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:39.073 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.073 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:39.073 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:39.073 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.073 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:39.073 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:39.073 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:39.073 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:39.073 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:23:39.073 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:39.073 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1873558 00:23:39.073 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:39.073 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1873558 00:23:39.073 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1873558 ']' 00:23:39.073 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.073 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:39.073 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.074 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:39.074 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:39.074 [2024-07-26 01:06:09.368593] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:23:39.074 [2024-07-26 01:06:09.368679] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.074 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.074 [2024-07-26 01:06:09.435297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.332 [2024-07-26 01:06:09.526081] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.332 [2024-07-26 01:06:09.526161] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.332 [2024-07-26 01:06:09.526188] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.332 [2024-07-26 01:06:09.526202] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.332 [2024-07-26 01:06:09.526214] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:39.332 [2024-07-26 01:06:09.526243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.332 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:39.332 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:23:39.332 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:39.332 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:39.332 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:39.332 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.332 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:39.332 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:39.332 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:39.332 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:39.332 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:39.332 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:39.332 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:39.332 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:39.590 [2024-07-26 01:06:09.902517] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.590 [2024-07-26 01:06:09.918500] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:39.590 [2024-07-26 01:06:09.918748] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.590 [2024-07-26 01:06:09.950996] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:39.590 malloc0 00:23:39.590 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:39.590 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1873702 00:23:39.590 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:39.590 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1873702 /var/tmp/bdevperf.sock 00:23:39.590 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1873702 ']' 00:23:39.590 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:39.590 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:39.590 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:39.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:39.590 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:39.590 01:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:39.850 [2024-07-26 01:06:10.044904] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:23:39.850 [2024-07-26 01:06:10.045000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1873702 ] 00:23:39.850 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.850 [2024-07-26 01:06:10.104419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.850 [2024-07-26 01:06:10.187477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.110 01:06:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:40.110 01:06:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:23:40.110 01:06:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:40.110 [2024-07-26 01:06:10.524217] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:40.111 [2024-07-26 01:06:10.524338] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:40.369 TLSTESTn1 00:23:40.369 01:06:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:40.369 Running I/O for 10 seconds... 00:23:50.380 00:23:50.380 Latency(us) 00:23:50.380 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.380 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:50.380 Verification LBA range: start 0x0 length 0x2000 00:23:50.380 TLSTESTn1 : 10.03 3215.43 12.56 0.00 0.00 39736.74 11650.84 57477.50 00:23:50.380 =================================================================================================================== 00:23:50.380 Total : 3215.43 12.56 0.00 0.00 39736.74 11650.84 57477.50 00:23:50.380 0 00:23:50.380 01:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:50.380 01:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:50.380 01:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:23:50.380 01:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:23:50.380 01:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:23:50.380 01:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:50.380 01:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:23:50.380 01:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:23:50.380 01:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:23:50.380 01:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:50.380 nvmf_trace.0 
00:23:50.639 01:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:23:50.639 01:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1873702 00:23:50.639 01:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1873702 ']' 00:23:50.639 01:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1873702 00:23:50.639 01:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:23:50.639 01:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:50.639 01:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1873702 00:23:50.639 01:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:50.639 01:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:50.639 01:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1873702' 00:23:50.639 killing process with pid 1873702 00:23:50.639 01:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1873702 00:23:50.639 Received shutdown signal, test time was about 10.000000 seconds 00:23:50.639 00:23:50.639 Latency(us) 00:23:50.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.639 =================================================================================================================== 00:23:50.639 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:50.639 [2024-07-26 01:06:20.899244] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:50.639 01:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 
1873702 00:23:50.898 01:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:50.898 01:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:50.898 01:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:50.898 01:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:50.898 01:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:50.898 01:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:50.898 01:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:50.899 rmmod nvme_tcp 00:23:50.899 rmmod nvme_fabrics 00:23:50.899 rmmod nvme_keyring 00:23:50.899 01:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:50.899 01:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:50.899 01:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:50.899 01:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1873558 ']' 00:23:50.899 01:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1873558 00:23:50.899 01:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1873558 ']' 00:23:50.899 01:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1873558 00:23:50.899 01:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:23:50.899 01:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:50.899 01:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1873558 00:23:50.899 01:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:23:50.899 01:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:50.899 01:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1873558' 00:23:50.899 killing process with pid 1873558 00:23:50.899 01:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1873558 00:23:50.899 [2024-07-26 01:06:21.199278] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:50.899 01:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1873558 00:23:51.157 01:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:51.157 01:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:51.157 01:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:51.157 01:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:51.157 01:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:51.157 01:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.157 01:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:51.157 01:06:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.693 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:53.693 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:53.693 00:23:53.693 real 0m16.434s 00:23:53.693 user 0m20.843s 00:23:53.693 sys 
0m5.819s 00:23:53.693 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:53.693 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:53.693 ************************************ 00:23:53.693 END TEST nvmf_fips 00:23:53.693 ************************************ 00:23:53.693 01:06:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 1 -eq 1 ']' 00:23:53.693 01:06:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:53.693 01:06:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:53.693 01:06:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:53.693 01:06:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:53.693 ************************************ 00:23:53.694 START TEST nvmf_fuzz 00:23:53.694 ************************************ 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:53.694 * Looking for test storage... 
00:23:53.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:53.694 
01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:23:53.694 01:06:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:55.075 01:06:25 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:55.075 
01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:55.075 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:55.075 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:55.075 01:06:25 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.075 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:55.076 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:55.076 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.076 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:55.076 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.076 
01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:55.076 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.076 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:55.076 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:55.076 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.076 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:55.076 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:55.076 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.076 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:55.076 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:23:55.076 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:55.076 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:55.076 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:55.076 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:55.076 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:55.076 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:55.076 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:55.076 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:55.076 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:55.076 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:55.076 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:55.076 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:55.076 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:55.076 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:55.076 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:55.076 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:55.076 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:55.076 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:55.076 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:55.076 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:55.334 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:55.334 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:55.334 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:55.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:55.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:23:55.334 00:23:55.334 --- 10.0.0.2 ping statistics --- 00:23:55.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.334 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:23:55.334 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:55.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:55.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:23:55.334 00:23:55.334 --- 10.0.0.1 ping statistics --- 00:23:55.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.334 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:23:55.334 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:55.334 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:23:55.334 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:55.334 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:55.334 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:55.334 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:55.334 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:55.334 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:55.334 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:55.334 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1876830 00:23:55.335 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini 
$1; exit 1' SIGINT SIGTERM EXIT 00:23:55.335 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1876830 00:23:55.335 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1876830 ']' 00:23:55.335 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:55.335 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.335 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:55.335 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:55.335 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:55.335 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:55.595 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:55.595 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:23:55.595 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:55.595 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.595 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:55.595 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.595 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:55.595 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.595 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:55.595 Malloc0 00:23:55.595 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.595 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:55.595 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.595 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:55.595 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.595 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:55.595 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.595 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:55.595 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.595 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:55.595 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.595 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:55.595 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.595 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:23:55.595 01:06:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:27.665 Fuzzing completed. 
Shutting down the fuzz application 00:24:27.665 00:24:27.665 Dumping successful admin opcodes: 00:24:27.665 8, 9, 10, 24, 00:24:27.665 Dumping successful io opcodes: 00:24:27.665 0, 9, 00:24:27.665 NS: 0x200003aeff00 I/O qp, Total commands completed: 464303, total successful commands: 2684, random_seed: 2097031168 00:24:27.665 NS: 0x200003aeff00 admin qp, Total commands completed: 56720, total successful commands: 450, random_seed: 2647604992 00:24:27.665 01:06:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:27.665 Fuzzing completed. Shutting down the fuzz application 00:24:27.665 00:24:27.665 Dumping successful admin opcodes: 00:24:27.665 24, 00:24:27.665 Dumping successful io opcodes: 00:24:27.665 00:24:27.665 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 4167605795 00:24:27.665 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 4167739631 00:24:27.665 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:27.665 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.665 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:27.665 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.665 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:27.665 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:27.665 01:06:57 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:27.665 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:27.665 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:27.665 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:27.665 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:27.665 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:27.665 rmmod nvme_tcp 00:24:27.665 rmmod nvme_fabrics 00:24:27.665 rmmod nvme_keyring 00:24:27.665 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:27.665 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:27.665 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:27.665 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 1876830 ']' 00:24:27.665 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 1876830 00:24:27.665 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1876830 ']' 00:24:27.665 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 1876830 00:24:27.665 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:24:27.665 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:27.665 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1876830 00:24:27.665 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:27.665 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:24:27.665 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1876830' 00:24:27.665 killing process with pid 1876830 00:24:27.665 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 1876830 00:24:27.665 01:06:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 1876830 00:24:27.925 01:06:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:27.926 01:06:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:27.926 01:06:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:27.926 01:06:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:27.926 01:06:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:27.926 01:06:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.926 01:06:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:27.926 01:06:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.830 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:29.830 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:29.830 00:24:29.830 real 0m36.705s 00:24:29.830 user 0m50.428s 00:24:29.830 sys 0m15.361s 00:24:29.830 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:29.830 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # 
set +x 00:24:29.830 ************************************ 00:24:29.830 END TEST nvmf_fuzz 00:24:29.830 ************************************ 00:24:30.088 01:07:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:30.088 01:07:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:30.088 01:07:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:30.088 01:07:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:30.088 ************************************ 00:24:30.088 START TEST nvmf_multiconnection 00:24:30.088 ************************************ 00:24:30.088 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:30.089 * Looking for test storage... 
00:24:30.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:30.089 01:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.989 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:31.989 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:24:31.989 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 
00:24:31.989 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:31.989 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:31.989 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:31.989 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:31.989 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:31.989 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:31.989 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:31.989 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:31.989 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:31.989 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:31.989 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:31.989 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:31.989 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:31.989 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:31.989 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:31.989 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:31.989 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:31.989 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:31.989 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:31.989 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:31.989 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:31.989 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:31.990 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:31.990 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:31.990 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:0a:00.1: cvl_0_1' 00:24:31.990 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:31.990 01:07:02 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:31.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:31.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:24:31.990 00:24:31.990 --- 10.0.0.2 ping statistics --- 00:24:31.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.990 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:31.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:31.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:24:31.990 00:24:31.990 --- 10.0.0.1 ping statistics --- 00:24:31.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.990 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:31.990 01:07:02 
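The `nvmf_tcp_init` sequence above (nvmf/common.sh@242-268) moves the target-side NIC into its own network namespace so target (10.0.0.2) and initiator (10.0.0.1) traverse a real network path rather than sharing one stack, then cross-pings to verify it. A standalone sketch of that topology follows; it substitutes a veth pair for the log's physical `cvl_0_0`/`cvl_0_1` interfaces so it can run on any Linux host, requires root, and is guarded so it is a no-op otherwise. Interface and namespace names here are illustrative, not the harness's.

```shell
#!/usr/bin/env sh
# Namespace topology sketch: target side isolated in its own netns,
# mirroring nvmf/common.sh's nvmf_tcp_init but with a veth pair.
NS=spdk_tgt_ns
TGT_IF=veth_tgt
INIT_IF=veth_init
TGT_IP=10.0.0.2
INIT_IP=10.0.0.1

if [ "$(id -u)" -eq 0 ] && command -v ip >/dev/null 2>&1; then
    ip netns add "$NS"
    ip link add "$INIT_IF" type veth peer name "$TGT_IF"
    ip link set "$TGT_IF" netns "$NS"                      # target side lives in the ns
    ip addr add "$INIT_IP/24" dev "$INIT_IF"
    ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
    ip link set "$INIT_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    ping -c 1 "$TGT_IP"                                    # same reachability check as the log
    ip netns del "$NS"                                     # tear down
fi
```

The harness additionally opens port 4420 with `iptables -I INPUT 1 ... -j ACCEPT` on the initiator side, which only matters when a firewall is active.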
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1882547 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1882547 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 1882547 ']' 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:31.990 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.247 [2024-07-26 01:07:02.463269] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:24:32.247 [2024-07-26 01:07:02.463353] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.247 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.247 [2024-07-26 01:07:02.537915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:32.247 [2024-07-26 01:07:02.631219] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.247 [2024-07-26 01:07:02.631276] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:32.247 [2024-07-26 01:07:02.631302] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:32.247 [2024-07-26 01:07:02.631316] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:32.247 [2024-07-26 01:07:02.631328] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:32.247 [2024-07-26 01:07:02.631418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.247 [2024-07-26 01:07:02.631472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:32.247 [2024-07-26 01:07:02.631587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:32.247 [2024-07-26 01:07:02.631589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.504 [2024-07-26 01:07:02.771243] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:32.504 01:07:02 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.504 Malloc1 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.504 [2024-07-26 01:07:02.826131] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.504 Malloc2 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.504 Malloc3 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.504 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.764 Malloc4 00:24:32.764 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.764 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:32.764 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.764 
01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.764 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.764 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:32.764 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.764 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.764 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.764 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:32.764 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.764 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.764 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.764 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:32.764 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:32.764 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.764 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.764 Malloc5 00:24:32.764 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.764 01:07:02 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:32.764 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.764 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.764 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.764 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:32.764 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.764 01:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.764 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.764 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:32.764 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.764 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.764 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.764 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:32.764 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:32.764 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:32.764 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.764 Malloc6 00:24:32.764 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.764 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:32.764 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.764 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.764 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.764 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:32.764 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.764 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.764 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.764 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:32.764 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.764 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.764 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.765 Malloc7 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.765 01:07:03 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.765 Malloc8 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.765 01:07:03 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.765 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:33.025 Malloc9 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:33.025 Malloc10 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:33.025 Malloc11 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:33.025 
01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:33.025 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.026 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:33.026 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.026 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:33.026 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.026 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:24:33.026 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.026 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:33.026 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.026 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:33.026 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:33.026 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:24:33.605 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:33.605 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:33.605 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:33.605 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:33.605 01:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:36.168 01:07:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:36.168 01:07:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:36.168 01:07:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:24:36.168 01:07:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:36.168 01:07:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:36.168 01:07:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:36.168 01:07:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.168 01:07:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:36.428 01:07:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:36.428 01:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:36.428 01:07:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:36.428 01:07:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:36.428 01:07:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:38.333 01:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:38.334 01:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:38.334 01:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:24:38.334 01:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:38.334 01:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:38.334 01:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:38.334 01:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:38.334 01:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:39.270 01:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:39.270 01:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:39.270 01:07:09 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:24:39.270 01:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:24:39.270 01:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2
00:24:41.169 01:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:24:41.169 01:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:24:41.169 01:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3
00:24:41.169 01:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:24:41.169 01:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:24:41.169 01:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:24:41.169 01:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:41.169 01:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420
00:24:41.736 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4
00:24:41.736 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0
00:24:41.736 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:24:41.736 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:24:41.736 01:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2
00:24:44.275 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:24:44.275 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:24:44.275 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4
00:24:44.275 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:24:44.275 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:24:44.275 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:24:44.275 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:44.275 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420
00:24:44.840 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5
00:24:44.840 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0
00:24:44.840 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:24:44.840 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:24:44.840 01:07:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2
00:24:46.744 01:07:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:24:46.744 01:07:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:24:46.744 01:07:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5
00:24:46.744 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:24:46.744 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:24:46.744 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:24:46.744 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:46.744 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420
00:24:47.311 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6
00:24:47.311 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0
00:24:47.311 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:24:47.311 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:24:47.311 01:07:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2
00:24:49.847 01:07:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:24:49.847 01:07:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:24:49.847 01:07:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6
00:24:49.847 01:07:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:24:49.847 01:07:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:24:49.847 01:07:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:24:49.847 01:07:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:49.847 01:07:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420
00:24:50.414 01:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7
00:24:50.414 01:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0
00:24:50.414 01:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:24:50.414 01:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:24:50.414 01:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2
00:24:52.316 01:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:24:52.316 01:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:24:52.316 01:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7
00:24:52.316 01:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:24:52.316 01:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:24:52.316 01:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:24:52.316 01:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:52.316 01:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420
00:24:53.256 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8
00:24:53.256 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0
00:24:53.256 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:24:53.256 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:24:53.256 01:07:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2
00:24:55.162 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:24:55.162 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:24:55.162 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8
00:24:55.162 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:24:55.162 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:24:55.162 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:24:55.162 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:55.162 01:07:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420
00:24:56.098 01:07:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9
00:24:56.098 01:07:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0
00:24:56.098 01:07:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:24:56.098 01:07:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:24:56.098 01:07:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2
00:24:58.630 01:07:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:24:58.630 01:07:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:24:58.630 01:07:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9
00:24:58.630 01:07:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:24:58.630 01:07:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:24:58.630 01:07:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:24:58.630 01:07:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:58.630 01:07:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420
00:24:59.198 01:07:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10
00:24:59.198 01:07:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0
00:24:59.198 01:07:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:24:59.198 01:07:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:24:59.198 01:07:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2
00:25:01.100 01:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:25:01.100 01:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:25:01.100 01:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10
00:25:01.100 01:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:25:01.100 01:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:25:01.100 01:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:25:01.100 01:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:01.100 01:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420
00:25:02.034 01:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11
00:25:02.034 01:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0
00:25:02.034 01:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:25:02.034 01:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:25:02.034 01:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2
00:25:03.936 01:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:25:03.936 01:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:25:03.936 01:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11
00:25:03.936 01:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:25:03.936 01:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:25:03.936 01:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:25:03.936 01:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10
00:25:03.936 [global]
00:25:03.936 thread=1
00:25:03.936 invalidate=1
00:25:03.936 rw=read
00:25:03.936 time_based=1
00:25:03.936 runtime=10
00:25:03.936 ioengine=libaio
00:25:03.936 direct=1
00:25:03.936 bs=262144
00:25:03.936 iodepth=64
00:25:03.936 norandommap=1
00:25:03.936 numjobs=1
00:25:03.936
00:25:03.936 [job0]
00:25:03.936 filename=/dev/nvme0n1
00:25:03.936 [job1]
00:25:03.936 filename=/dev/nvme10n1
00:25:03.936 [job2]
00:25:03.936 filename=/dev/nvme1n1
00:25:03.936 [job3]
00:25:03.936 filename=/dev/nvme2n1
00:25:03.936 [job4]
00:25:03.936 filename=/dev/nvme3n1
00:25:03.936 [job5]
00:25:03.936 filename=/dev/nvme4n1
00:25:03.936 [job6]
00:25:03.936 filename=/dev/nvme5n1
00:25:03.936 [job7]
00:25:03.936 filename=/dev/nvme6n1
00:25:03.936 [job8]
00:25:03.936 filename=/dev/nvme7n1
00:25:03.936 [job9]
00:25:03.936 filename=/dev/nvme8n1
00:25:03.936 [job10]
00:25:03.936 filename=/dev/nvme9n1
00:25:04.194 Could not set queue depth (nvme0n1)
00:25:04.194 Could not set queue depth (nvme10n1)
00:25:04.194 Could not set queue depth (nvme1n1)
00:25:04.194 Could not set queue depth (nvme2n1)
00:25:04.194 Could not set queue depth (nvme3n1)
00:25:04.194 Could not set queue depth (nvme4n1)
00:25:04.194 Could not set queue depth (nvme5n1)
00:25:04.194 Could not set queue depth (nvme6n1)
00:25:04.194 Could not set queue depth (nvme7n1)
00:25:04.194 Could not set queue depth (nvme8n1)
00:25:04.194 Could not set queue depth (nvme9n1)
00:25:04.194 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:04.194 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:04.194 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:04.194 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:04.194 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:04.194 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:04.194 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:04.194 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:04.194 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:04.194 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:04.194 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:04.194 fio-3.35
00:25:04.194 Starting 11 threads
00:25:16.402
00:25:16.402 job0: (groupid=0, jobs=1): err= 0: pid=1886829: Fri Jul 26 01:07:45 2024
00:25:16.402 read: IOPS=831, BW=208MiB/s (218MB/s)(2097MiB/10081msec)
00:25:16.402 slat (usec): min=9, max=150191, avg=859.28, stdev=3537.73
00:25:16.402 clat (msec): min=2, max=268, avg=75.99, stdev=48.06
00:25:16.402 lat (msec): min=2, max=285, avg=76.85, stdev=48.35
00:25:16.402 clat percentiles (msec):
00:25:16.402 | 1.00th=[ 7], 5.00th=[ 27], 10.00th=[ 32], 20.00th=[ 39],
00:25:16.402 | 30.00th=[ 47], 40.00th=[ 53], 50.00th=[ 60], 60.00th=[ 70],
00:25:16.402 | 70.00th=[ 85], 80.00th=[ 113], 90.00th=[ 148], 95.00th=[ 182],
00:25:16.402 | 99.00th=[ 220], 99.50th=[ 232], 99.90th=[ 245], 99.95th=[ 262],
00:25:16.402 | 99.99th=[ 271]
00:25:16.402 bw ( KiB/s): min=112640, max=408064, per=11.03%, avg=213021.60, stdev=95420.05, samples=20
00:25:16.403 iops : min= 440, max= 1594, avg=832.10, stdev=372.73, samples=20
00:25:16.403 lat (msec) : 4=0.11%, 10=1.55%, 20=1.42%, 50=31.81%, 100=40.26%
00:25:16.403 lat (msec) : 250=24.79%, 500=0.06%
00:25:16.403 cpu : usr=0.44%, sys=2.75%, ctx=1692, majf=0, minf=4097
00:25:16.403 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:25:16.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:16.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:16.403 issued rwts: total=8386,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:16.403 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:16.403 job1: (groupid=0, jobs=1): err= 0: pid=1886830: Fri Jul 26 01:07:45 2024
00:25:16.403 read: IOPS=719, BW=180MiB/s (189MB/s)(1813MiB/10082msec)
00:25:16.403 slat (usec): min=10, max=113941, avg=1250.69, stdev=4009.87
00:25:16.403 clat (msec): min=2, max=268, avg=87.63, stdev=40.30
00:25:16.403 lat (msec): min=2, max=268, avg=88.88, stdev=40.85
00:25:16.403 clat percentiles (msec):
00:25:16.403 | 1.00th=[ 26], 5.00th=[ 32], 10.00th=[ 38], 20.00th=[ 51],
00:25:16.403 | 30.00th=[ 62], 40.00th=[ 73], 50.00th=[ 85], 60.00th=[ 94],
00:25:16.403 | 70.00th=[ 106], 80.00th=[ 121], 90.00th=[ 144], 95.00th=[ 159],
00:25:16.403 | 99.00th=[ 209], 99.50th=[ 222], 99.90th=[ 247], 99.95th=[ 255],
00:25:16.403 | 99.99th=[ 268]
00:25:16.403 bw ( KiB/s): min=90624, max=411648, per=9.53%, avg=184038.35, stdev=74989.61, samples=20
00:25:16.403 iops : min= 354, max= 1608, avg=718.85, stdev=292.91, samples=20
00:25:16.403 lat (msec) : 4=0.01%, 10=0.25%, 20=0.46%, 50=18.78%, 100=45.59%
00:25:16.403 lat (msec) : 250=34.85%, 500=0.07%
00:25:16.403 cpu : usr=0.42%, sys=2.59%, ctx=1397, majf=0, minf=4097
00:25:16.403 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1%
00:25:16.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:16.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:16.403 issued rwts: total=7252,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:16.403 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:16.403 job2: (groupid=0, jobs=1): err= 0: pid=1886831: Fri Jul 26 01:07:45 2024
00:25:16.403 read: IOPS=807, BW=202MiB/s (212MB/s)(2042MiB/10120msec)
00:25:16.403 slat (usec): min=9, max=128401, avg=1007.82, stdev=3688.01
00:25:16.403 clat (msec): min=5, max=280, avg=78.21, stdev=48.64
00:25:16.403 lat (msec): min=5, max=280, avg=79.22, stdev=49.24
00:25:16.403 clat percentiles (msec):
00:25:16.403 | 1.00th=[ 25], 5.00th=[ 28], 10.00th=[ 29], 20.00th=[ 31],
00:25:16.403 | 30.00th=[ 34], 40.00th=[ 41], 50.00th=[ 70], 60.00th=[ 88],
00:25:16.403 | 70.00th=[ 113], 80.00th=[ 128], 90.00th=[ 146], 95.00th=[ 159],
00:25:16.403 | 99.00th=[ 201], 99.50th=[ 209], 99.90th=[ 220], 99.95th=[ 222],
00:25:16.403 | 99.99th=[ 279]
00:25:16.403 bw ( KiB/s): min=115712, max=521216, per=10.75%, avg=207462.40, stdev=136518.08, samples=20
00:25:16.403 iops : min= 452, max= 2036, avg=810.40, stdev=533.27, samples=20
00:25:16.403 lat (msec) : 10=0.06%, 20=0.61%, 50=42.37%, 100=21.10%, 250=35.85%
00:25:16.403 lat (msec) : 500=0.01%
00:25:16.403 cpu : usr=0.59%, sys=2.49%, ctx=1657, majf=0, minf=4097
00:25:16.403 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:25:16.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:16.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:16.403 issued rwts: total=8167,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:16.403 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:16.403 job3: (groupid=0, jobs=1): err= 0: pid=1886832: Fri Jul 26 01:07:45 2024
00:25:16.403 read: IOPS=628, BW=157MiB/s (165MB/s)(1590MiB/10114msec)
00:25:16.403 slat (usec): min=9, max=114709, avg=659.30, stdev=4206.57
00:25:16.403 clat (msec): min=4, max=306, avg=101.04, stdev=52.29
00:25:16.403 lat (msec): min=4, max=306, avg=101.70, stdev=52.75
00:25:16.403 clat percentiles (msec):
00:25:16.403 | 1.00th=[ 10], 5.00th=[ 26], 10.00th=[ 36], 20.00th=[ 57],
00:25:16.403 | 30.00th=[ 68], 40.00th=[ 80], 50.00th=[ 95], 60.00th=[ 115],
00:25:16.403 | 70.00th=[ 130], 80.00th=[ 142], 90.00th=[ 171], 95.00th=[ 203],
00:25:16.403 | 99.00th=[ 234], 99.50th=[ 241], 99.90th=[ 284], 99.95th=[ 284],
00:25:16.403 | 99.99th=[ 305]
00:25:16.403 bw ( KiB/s): min=114688, max=233984, per=8.35%, avg=161158.80, stdev=35548.76, samples=20
00:25:16.403 iops : min= 448, max= 914, avg=629.50, stdev=138.84, samples=20
00:25:16.403 lat (msec) : 10=1.05%, 20=2.25%, 50=13.01%, 100=35.96%, 250=47.40%
00:25:16.403 lat (msec) : 500=0.33%
00:25:16.403 cpu : usr=0.30%, sys=1.67%, ctx=1532, majf=0, minf=3721
00:25:16.403 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0%
00:25:16.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:16.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:16.403 issued rwts: total=6359,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:16.403 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:16.403 job4: (groupid=0, jobs=1): err= 0: pid=1886833: Fri Jul 26 01:07:45 2024
00:25:16.403 read: IOPS=676, BW=169MiB/s (177MB/s)(1711MiB/10123msec)
00:25:16.403 slat (usec): min=9, max=146475, avg=806.17, stdev=4725.69
00:25:16.403 clat (usec): min=1329, max=356751, avg=93757.83, stdev=53006.66
00:25:16.403 lat (usec): min=1352, max=356985, avg=94564.00, stdev=53631.67
00:25:16.403 clat percentiles (msec):
00:25:16.403 | 1.00th=[ 7], 5.00th=[ 25], 10.00th=[ 35], 20.00th=[ 49],
00:25:16.403 | 30.00th=[ 58], 40.00th=[ 68], 50.00th=[ 81], 60.00th=[ 100],
00:25:16.403 | 70.00th=[ 126], 80.00th=[ 142], 90.00th=[ 165], 95.00th=[ 188],
00:25:16.403 | 99.00th=[ 236], 99.50th=[ 247], 99.90th=[ 284], 99.95th=[ 292],
00:25:16.403 | 99.99th=[ 359]
00:25:16.403 bw ( KiB/s): min=70797, max=275968, per=8.99%, avg=173606.45, stdev=58323.02, samples=20
00:25:16.403 iops : min= 276, max= 1078, avg=678.10, stdev=227.87, samples=20
00:25:16.403 lat (msec) : 2=0.13%, 4=0.50%, 10=1.05%, 20=2.16%, 50=17.33%
00:25:16.403 lat (msec) : 100=39.28%, 250=39.28%, 500=0.26%
00:25:16.403 cpu : usr=0.34%, sys=1.73%, ctx=1566, majf=0, minf=4097
00:25:16.403 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1%
00:25:16.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:16.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:16.403 issued rwts: total=6845,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:16.403 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:16.403 job5: (groupid=0, jobs=1): err= 0: pid=1886834: Fri Jul 26 01:07:45 2024
00:25:16.403 read: IOPS=581, BW=145MiB/s (152MB/s)(1471MiB/10125msec)
00:25:16.403 slat (usec): min=10, max=116599, avg=1330.09, stdev=5074.83
00:25:16.403 clat (msec): min=2, max=319, avg=108.68, stdev=59.72
00:25:16.403 lat (msec): min=2, max=335, avg=110.01, stdev=60.42
00:25:16.403 clat percentiles (msec):
00:25:16.403 | 1.00th=[ 9], 5.00th=[ 19], 10.00th=[ 43], 20.00th=[ 58],
00:25:16.403 | 30.00th=[ 69], 40.00th=[ 80], 50.00th=[ 96], 60.00th=[ 120],
00:25:16.403 | 70.00th=[ 146], 80.00th=[ 165], 90.00th=[ 188], 95.00th=[ 213],
00:25:16.403 | 99.00th=[ 259], 99.50th=[ 305], 99.90th=[ 309], 99.95th=[ 309],
00:25:16.403 | 99.99th=[ 321]
00:25:16.403 bw ( KiB/s): min=86528, max=252928, per=7.72%, avg=149017.60, stdev=55467.77, samples=20
00:25:16.403 iops : min= 338, max= 988, avg=582.10, stdev=216.67, samples=20
00:25:16.403 lat (msec) : 4=0.07%, 10=1.60%, 20=3.93%, 50=7.32%, 100=38.88%
00:25:16.403 lat (msec) : 250=46.83%, 500=1.38%
00:25:16.403 cpu : usr=0.39%, sys=2.04%, ctx=1333, majf=0, minf=4097
00:25:16.403 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9%
00:25:16.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:16.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:16.403 issued rwts: total=5885,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:16.403 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:16.403 job6: (groupid=0, jobs=1): err= 0: pid=1886835: Fri Jul 26 01:07:45 2024
00:25:16.403 read: IOPS=477, BW=119MiB/s (125MB/s)(1208MiB/10122msec)
00:25:16.403 slat (usec): min=12, max=154656, avg=2042.97, stdev=6564.26
00:25:16.403 clat (msec): min=8, max=323, avg=131.88, stdev=46.58
00:25:16.403 lat (msec): min=8, max=382, avg=133.93, stdev=47.33
00:25:16.403 clat percentiles (msec):
00:25:16.403 | 1.00th=[ 61], 5.00th=[ 75], 10.00th=[ 81], 20.00th=[ 90],
00:25:16.403 | 30.00th=[ 100], 40.00th=[ 109], 50.00th=[ 125], 60.00th=[ 140],
00:25:16.403 | 70.00th=[ 155], 80.00th=[ 171], 90.00th=[ 203], 95.00th=[ 224],
00:25:16.403 | 99.00th=[ 249], 99.50th=[ 259], 99.90th=[ 288], 99.95th=[ 292],
00:25:16.403 | 99.99th=[ 326]
00:25:16.403 bw ( KiB/s): min=72192, max=196096, per=6.32%, avg=122048.15, stdev=37203.50, samples=20
00:25:16.403 iops : min= 282, max= 766, avg=476.75, stdev=145.33, samples=20
00:25:16.403 lat (msec) : 10=0.08%, 20=0.12%, 50=0.23%, 100=31.27%, 250=67.51%
00:25:16.403 lat (msec) : 500=0.79%
00:25:16.403 cpu : usr=0.36%, sys=1.78%, ctx=966, majf=0, minf=4097
00:25:16.403 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7%
00:25:16.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:16.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:16.403 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:16.403 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:16.403 job7: (groupid=0, jobs=1): err= 0: pid=1886836: Fri Jul 26 01:07:45 2024
00:25:16.403 read: IOPS=580, BW=145MiB/s (152MB/s)(1469MiB/10126msec)
00:25:16.403 slat (usec): min=9, max=165664, avg=1111.63, stdev=5487.32
00:25:16.403 clat (usec): min=1626, max=378210, avg=109099.04, stdev=52036.10
00:25:16.403 lat (usec): min=1681, max=379840, avg=110210.67, stdev=52768.35
00:25:16.403 clat percentiles (msec):
00:25:16.403 | 1.00th=[ 13], 5.00th=[ 32], 10.00th=[ 50], 20.00th=[ 69],
00:25:16.403 | 30.00th=[ 82], 40.00th=[ 89], 50.00th=[ 100], 60.00th=[ 112],
00:25:16.403 | 70.00th=[ 132], 80.00th=[ 155], 90.00th=[ 182], 95.00th=[ 213],
00:25:16.403 | 99.00th=[ 234], 99.50th=[ 239], 99.90th=[ 300], 99.95th=[ 380],
00:25:16.403 | 99.99th=[ 380]
00:25:16.403 bw ( KiB/s): min=61952, max=264192, per=7.70%, avg=148749.20, stdev=49672.78, samples=20
00:25:16.403 iops : min= 242, max= 1032, avg=581.05, stdev=194.04, samples=20
00:25:16.403 lat (msec) : 2=0.02%, 4=0.31%, 10=0.46%, 20=1.84%, 50=7.64%
00:25:16.403 lat (msec) : 100=40.23%, 250=49.37%, 500=0.14%
00:25:16.404 cpu : usr=0.27%, sys=2.00%, ctx=1364, majf=0, minf=4097
00:25:16.404 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9%
00:25:16.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:16.404 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:16.404 issued rwts: total=5874,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:16.404 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:16.404 job8: (groupid=0, jobs=1): err= 0: pid=1886837: Fri Jul 26 01:07:45 2024
00:25:16.404 read: IOPS=607, BW=152MiB/s (159MB/s)(1539MiB/10127msec)
00:25:16.404 slat (usec): min=10, max=109248, avg=1331.08, stdev=4571.96
00:25:16.404 clat (msec): min=2, max=274, avg=103.87, stdev=47.87
00:25:16.404 lat (msec): min=2, max=286, avg=105.20, stdev=48.59
00:25:16.404 clat percentiles (msec):
00:25:16.404 | 1.00th=[ 13], 5.00th=[ 35], 10.00th=[ 48], 20.00th=[ 61],
00:25:16.404 | 30.00th=[ 73], 40.00th=[ 88], 50.00th=[ 99], 60.00th=[ 109],
00:25:16.404 | 70.00th=[ 131], 80.00th=[ 148], 90.00th=[ 167], 95.00th=[ 184],
00:25:16.404 | 99.00th=[ 236], 99.50th=[ 245], 99.90th=[ 275], 99.95th=[ 275],
00:25:16.404 | 99.99th=[ 275]
00:25:16.404 bw ( KiB/s): min=84992, max=315392, per=8.08%, avg=155910.15, stdev=59154.21, samples=20
00:25:16.404 iops : min= 332, max= 1232, avg=609.00, stdev=231.05, samples=20
00:25:16.404 lat (msec) : 4=0.05%, 10=0.49%, 20=1.15%, 50=8.79%, 100=42.47%
00:25:16.404 lat (msec) : 250=46.74%, 500=0.31%
00:25:16.404 cpu : usr=0.47%, sys=1.80%, ctx=1332, majf=0, minf=4097
00:25:16.404 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0%
00:25:16.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:16.404 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:16.404 issued rwts: total=6155,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:16.404 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:16.404 job9: (groupid=0, jobs=1): err= 0: pid=1886838: Fri Jul 26 01:07:45 2024
00:25:16.404 read: IOPS=798, BW=200MiB/s (209MB/s)(2020MiB/10117msec)
00:25:16.404 slat (usec): min=9, max=197987, avg=770.42, stdev=4547.56
00:25:16.404 clat (usec): min=895, max=324523, avg=79300.90, stdev=57694.84
00:25:16.404 lat (usec): min=921, max=392706, avg=80071.32, stdev=58284.05
00:25:16.404 clat percentiles (msec):
00:25:16.404 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 20], 20.00th=[ 31],
00:25:16.404 | 30.00th=[ 43], 40.00th=[ 58], 50.00th=[ 68], 60.00th=[ 79],
00:25:16.404 | 70.00th=[ 91], 80.00th=[ 117], 90.00th=[ 163], 95.00th=[ 211],
00:25:16.404 | 99.00th=[ 259], 99.50th=[ 266], 99.90th=[ 279], 99.95th=[ 284],
00:25:16.404 | 99.99th=[ 326]
00:25:16.404 bw ( KiB/s): min=63488, max=334336, per=10.63%, avg=205163.15, stdev=81002.23, samples=20
00:25:16.404 iops : min= 248, max= 1306, avg=801.40, stdev=316.41, samples=20
00:25:16.404 lat (usec) : 1000=0.26%
00:25:16.404 lat (msec) : 2=0.62%, 4=0.54%, 10=4.27%, 20=4.61%, 50=24.15%
00:25:16.404 lat (msec) : 100=40.36%, 250=23.92%, 500=1.28%
00:25:16.404 cpu : usr=0.43%, sys=2.36%, ctx=1746, majf=0, minf=4097
00:25:16.404 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:25:16.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:16.404 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:16.404 issued rwts: total=8078,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:16.404 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:16.404 job10: (groupid=0, jobs=1): err= 0: pid=1886839: Fri Jul 26 01:07:45 2024
00:25:16.404 read: IOPS=852, BW=213MiB/s (223MB/s)(2135MiB/10015msec)
00:25:16.404 slat (usec): min=10, max=185965, avg=1045.24, stdev=4163.71
00:25:16.404 clat (msec): min=3, max=256, avg=73.97, stdev=46.74
00:25:16.404 lat (msec): min=3, max=398, avg=75.02, stdev=47.35
00:25:16.404 clat percentiles (msec):
00:25:16.404 | 1.00th=[ 19], 5.00th=[ 33], 10.00th=[ 35], 20.00th=[ 36],
00:25:16.404 | 30.00th=[ 38], 40.00th=[ 42], 50.00th=[ 60], 60.00th=[ 77],
00:25:16.404 | 70.00th=[ 91], 80.00th=[ 110], 90.00th=[ 140], 95.00th=[ 157],
00:25:16.404 | 99.00th=[ 234], 99.50th=[ 243], 99.90th=[ 255], 99.95th=[ 255],
00:25:16.404 | 99.99th=[ 257]
00:25:16.404 bw ( KiB/s): min=100040, max=445440, per=11.24%, avg=216970.00, stdev=113471.15, samples=20
00:25:16.404 iops : min= 390, max= 1740, avg=847.50, stdev=443.29, samples=20
00:25:16.404 lat (msec) : 4=0.01%, 10=0.15%, 20=1.22%, 50=44.55%, 100=29.82%
00:25:16.404 lat (msec) : 250=23.94%, 500=0.30%
00:25:16.404 cpu : usr=0.50%, sys=2.85%, ctx=1693, majf=0, minf=4097
00:25:16.404 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:25:16.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:16.404 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:16.404 issued rwts: total=8538,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:16.404 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:16.404
00:25:16.404 Run status group 0 (all jobs):
00:25:16.404 READ: bw=1885MiB/s (1977MB/s), 119MiB/s-213MiB/s (125MB/s-223MB/s), io=18.6GiB (20.0GB), run=10015-10127msec
00:25:16.404
00:25:16.404 Disk stats (read/write):
00:25:16.404 nvme0n1: ios=16603/0, merge=0/0, ticks=1242170/0, in_queue=1242170, util=97.26%
00:25:16.404 nvme10n1: ios=14289/0, merge=0/0, ticks=1234946/0, in_queue=1234946, util=97.47%
00:25:16.404 nvme1n1: ios=16140/0, merge=0/0, ticks=1237003/0, in_queue=1237003, util=97.72%
00:25:16.404 nvme2n1: ios=12539/0, merge=0/0, ticks=1246319/0, in_queue=1246319, util=97.87%
00:25:16.404 nvme3n1: ios=13521/0, merge=0/0, ticks=1240418/0, in_queue=1240418, util=97.94%
00:25:16.404 nvme4n1: ios=11596/0, merge=0/0, ticks=1233091/0, in_queue=1233091, util=98.26%
00:25:16.404 nvme5n1: ios=9487/0, merge=0/0, ticks=1228428/0, in_queue=1228428, util=98.44%
00:25:16.404 nvme6n1: ios=11572/0, merge=0/0, ticks=1237203/0, in_queue=1237203, util=98.53%
00:25:16.404 nvme7n1: ios=12146/0, merge=0/0, ticks=1234220/0, in_queue=1234220, util=98.94%
00:25:16.404 nvme8n1: ios=15792/0, merge=0/0, ticks=1229243/0, in_queue=1229243, util=99.10%
00:25:16.404 nvme9n1: ios=16757/0, merge=0/0, ticks=1241072/0, in_queue=1241072, util=99.20%
00:25:16.404 01:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10
00:25:16.404 [global]
00:25:16.404 thread=1
00:25:16.404 invalidate=1
00:25:16.404 rw=randwrite
00:25:16.404 time_based=1
00:25:16.404 runtime=10
00:25:16.404 ioengine=libaio
00:25:16.404 direct=1
00:25:16.404 bs=262144
00:25:16.404 iodepth=64
00:25:16.404 norandommap=1
00:25:16.404 numjobs=1
00:25:16.404
00:25:16.404 [job0]
00:25:16.404 filename=/dev/nvme0n1
00:25:16.404 [job1]
00:25:16.404 filename=/dev/nvme10n1
00:25:16.404 [job2]
00:25:16.404 filename=/dev/nvme1n1
00:25:16.404 [job3]
00:25:16.404 filename=/dev/nvme2n1
00:25:16.404 [job4]
00:25:16.404 filename=/dev/nvme3n1
00:25:16.404 [job5]
00:25:16.404 filename=/dev/nvme4n1
00:25:16.404 [job6]
00:25:16.404 filename=/dev/nvme5n1
00:25:16.404 [job7]
00:25:16.404 filename=/dev/nvme6n1
00:25:16.404 [job8]
00:25:16.404 filename=/dev/nvme7n1
00:25:16.404 [job9]
00:25:16.404 filename=/dev/nvme8n1
00:25:16.404 [job10]
00:25:16.404 filename=/dev/nvme9n1
00:25:16.404 Could not set queue depth (nvme0n1)
00:25:16.404 Could not set queue depth (nvme10n1)
00:25:16.404 Could not set queue depth (nvme1n1)
00:25:16.404 Could not set queue depth (nvme2n1)
00:25:16.404 Could not set queue depth (nvme3n1)
00:25:16.404 Could not set queue depth (nvme4n1)
00:25:16.404 Could not set queue depth (nvme5n1)
00:25:16.404 Could not set queue depth (nvme6n1)
00:25:16.404 Could not set queue depth (nvme7n1)
00:25:16.404 Could not set queue depth (nvme8n1)
00:25:16.404 Could not set queue depth (nvme9n1)
00:25:16.404 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:16.404 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:16.404 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:16.404 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:16.404 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:16.404 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:16.404 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:16.404 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:16.404 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:16.404 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:16.404 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:16.404 fio-3.35
00:25:16.404 Starting 11 threads
00:25:26.383
00:25:26.383 job0: (groupid=0, jobs=1): err= 0: pid=1887864: Fri Jul 26 01:07:55 2024
00:25:26.383 write: IOPS=557, BW=139MiB/s (146MB/s)(1422MiB/10191msec); 0 zone resets
00:25:26.383 slat (usec): min=18, max=111460, avg=1309.94, stdev=4103.30
00:25:26.383 clat (usec): min=1212, max=472899, avg=113333.01, stdev=78828.94
00:25:26.383 lat (usec): min=1838, max=472938, avg=114642.94, stdev=79851.98
00:25:26.383 clat percentiles (msec):
00:25:26.383 | 1.00th=[ 7], 5.00th=[ 22], 10.00th=[ 36], 20.00th=[ 42],
00:25:26.383 | 30.00th=[ 45], 40.00th=[ 70], 50.00th=[ 91], 60.00th=[ 128],
00:25:26.383 | 70.00th=[ 157], 80.00th=[ 190], 90.00th=[ 232], 95.00th=[ 253],
00:25:26.383 | 99.00th=[ 313], 99.50th=[ 334], 99.90th=[ 460], 99.95th=[ 468],
00:25:26.383 | 99.99th=[ 472]
00:25:26.383 bw ( KiB/s): min=67206, max=374784, per=9.79%, avg=143955.50, stdev=79907.23, samples=20
00:25:26.383 iops : min= 262, max= 1464, avg=562.30, stdev=312.16, samples=20
00:25:26.383 lat (msec) : 2=0.04%, 4=0.35%, 10=1.37%, 20=2.87%, 50=28.35%
00:25:26.383 lat (msec) : 100=20.89%, 250=40.64%, 500=5.49%
00:25:26.383 cpu : usr=1.88%, sys=1.88%, ctx=3086, majf=0, minf=1
00:25:26.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9%
00:25:26.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:26.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:26.383 issued rwts: total=0,5686,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:26.383 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:26.383 job1: (groupid=0, jobs=1): err= 0: pid=1887876: Fri Jul 26 01:07:55 2024
00:25:26.383 write: IOPS=535, BW=134MiB/s (140MB/s)(1359MiB/10162msec); 0 zone resets 00:25:26.383 slat (usec): min=18, max=47984, avg=1262.06, stdev=3426.04 00:25:26.383 clat (msec): min=2, max=386, avg=118.25, stdev=61.30 00:25:26.383 lat (msec): min=2, max=386, avg=119.52, stdev=62.09 00:25:26.383 clat percentiles (msec): 00:25:26.383 | 1.00th=[ 7], 5.00th=[ 24], 10.00th=[ 42], 20.00th=[ 67], 00:25:26.383 | 30.00th=[ 85], 40.00th=[ 90], 50.00th=[ 107], 60.00th=[ 131], 00:25:26.383 | 70.00th=[ 157], 80.00th=[ 176], 90.00th=[ 201], 95.00th=[ 224], 00:25:26.383 | 99.00th=[ 249], 99.50th=[ 271], 99.90th=[ 363], 99.95th=[ 376], 00:25:26.383 | 99.99th=[ 388] 00:25:26.383 bw ( KiB/s): min=69632, max=259584, per=9.35%, avg=137543.35, stdev=52062.25, samples=20 00:25:26.383 iops : min= 272, max= 1014, avg=537.25, stdev=203.34, samples=20 00:25:26.383 lat (msec) : 4=0.35%, 10=1.34%, 20=1.95%, 50=10.06%, 100=33.22% 00:25:26.383 lat (msec) : 250=52.29%, 500=0.79% 00:25:26.383 cpu : usr=1.59%, sys=1.86%, ctx=3039, majf=0, minf=1 00:25:26.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:26.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:26.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:26.383 issued rwts: total=0,5437,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:26.383 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:26.383 job2: (groupid=0, jobs=1): err= 0: pid=1887877: Fri Jul 26 01:07:55 2024 00:25:26.383 write: IOPS=635, BW=159MiB/s (167MB/s)(1609MiB/10129msec); 0 zone resets 00:25:26.383 slat (usec): min=16, max=90378, avg=981.26, stdev=3228.87 00:25:26.383 clat (msec): min=2, max=361, avg=99.72, stdev=72.93 00:25:26.383 lat (msec): min=2, max=363, avg=100.70, stdev=73.79 00:25:26.383 clat percentiles (msec): 00:25:26.383 | 1.00th=[ 9], 5.00th=[ 20], 10.00th=[ 32], 20.00th=[ 40], 00:25:26.383 | 30.00th=[ 47], 40.00th=[ 58], 50.00th=[ 79], 60.00th=[ 95], 
00:25:26.383 | 70.00th=[ 127], 80.00th=[ 159], 90.00th=[ 207], 95.00th=[ 259], 00:25:26.383 | 99.00th=[ 317], 99.50th=[ 338], 99.90th=[ 347], 99.95th=[ 355], 00:25:26.383 | 99.99th=[ 363] 00:25:26.383 bw ( KiB/s): min=53248, max=331776, per=11.09%, avg=163106.50, stdev=87984.16, samples=20 00:25:26.383 iops : min= 208, max= 1296, avg=637.10, stdev=343.73, samples=20 00:25:26.383 lat (msec) : 4=0.12%, 10=1.26%, 20=3.76%, 50=26.90%, 100=31.90% 00:25:26.383 lat (msec) : 250=30.51%, 500=5.55% 00:25:26.383 cpu : usr=2.02%, sys=2.00%, ctx=3828, majf=0, minf=1 00:25:26.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:25:26.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:26.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:26.383 issued rwts: total=0,6435,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:26.383 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:26.383 job3: (groupid=0, jobs=1): err= 0: pid=1887878: Fri Jul 26 01:07:55 2024 00:25:26.383 write: IOPS=473, BW=118MiB/s (124MB/s)(1204MiB/10179msec); 0 zone resets 00:25:26.383 slat (usec): min=16, max=47904, avg=1318.73, stdev=3570.66 00:25:26.383 clat (msec): min=3, max=407, avg=133.91, stdev=61.90 00:25:26.383 lat (msec): min=3, max=407, avg=135.22, stdev=62.73 00:25:26.383 clat percentiles (msec): 00:25:26.383 | 1.00th=[ 12], 5.00th=[ 30], 10.00th=[ 53], 20.00th=[ 86], 00:25:26.383 | 30.00th=[ 101], 40.00th=[ 108], 50.00th=[ 128], 60.00th=[ 153], 00:25:26.383 | 70.00th=[ 171], 80.00th=[ 188], 90.00th=[ 211], 95.00th=[ 230], 00:25:26.383 | 99.00th=[ 288], 99.50th=[ 330], 99.90th=[ 397], 99.95th=[ 397], 00:25:26.383 | 99.99th=[ 409] 00:25:26.383 bw ( KiB/s): min=66560, max=197120, per=8.27%, avg=121620.20, stdev=38123.04, samples=20 00:25:26.383 iops : min= 260, max= 770, avg=475.00, stdev=148.85, samples=20 00:25:26.383 lat (msec) : 4=0.04%, 10=0.81%, 20=2.24%, 50=6.27%, 100=20.00% 00:25:26.383 lat (msec) : 
250=68.29%, 500=2.35% 00:25:26.383 cpu : usr=1.35%, sys=1.66%, ctx=2857, majf=0, minf=1 00:25:26.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:26.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:26.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:26.383 issued rwts: total=0,4815,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:26.383 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:26.383 job4: (groupid=0, jobs=1): err= 0: pid=1887879: Fri Jul 26 01:07:55 2024 00:25:26.383 write: IOPS=649, BW=162MiB/s (170MB/s)(1634MiB/10062msec); 0 zone resets 00:25:26.383 slat (usec): min=18, max=90979, avg=1280.70, stdev=3249.89 00:25:26.383 clat (msec): min=2, max=327, avg=97.19, stdev=56.99 00:25:26.383 lat (msec): min=2, max=327, avg=98.47, stdev=57.77 00:25:26.383 clat percentiles (msec): 00:25:26.383 | 1.00th=[ 9], 5.00th=[ 21], 10.00th=[ 32], 20.00th=[ 43], 00:25:26.383 | 30.00th=[ 53], 40.00th=[ 81], 50.00th=[ 92], 60.00th=[ 103], 00:25:26.383 | 70.00th=[ 123], 80.00th=[ 146], 90.00th=[ 178], 95.00th=[ 205], 00:25:26.383 | 99.00th=[ 234], 99.50th=[ 300], 99.90th=[ 321], 99.95th=[ 321], 00:25:26.383 | 99.99th=[ 330] 00:25:26.384 bw ( KiB/s): min=81920, max=400896, per=11.27%, avg=165734.40, stdev=74266.13, samples=20 00:25:26.384 iops : min= 320, max= 1566, avg=647.40, stdev=290.10, samples=20 00:25:26.384 lat (msec) : 4=0.17%, 10=1.39%, 20=3.30%, 50=23.36%, 100=29.63% 00:25:26.384 lat (msec) : 250=41.32%, 500=0.83% 00:25:26.384 cpu : usr=1.98%, sys=1.87%, ctx=2937, majf=0, minf=1 00:25:26.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:25:26.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:26.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:26.384 issued rwts: total=0,6537,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:26.384 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:25:26.384 job5: (groupid=0, jobs=1): err= 0: pid=1887880: Fri Jul 26 01:07:55 2024 00:25:26.384 write: IOPS=439, BW=110MiB/s (115MB/s)(1107MiB/10066msec); 0 zone resets 00:25:26.384 slat (usec): min=23, max=66533, avg=1776.62, stdev=4527.12 00:25:26.384 clat (msec): min=3, max=330, avg=143.42, stdev=69.58 00:25:26.384 lat (msec): min=3, max=330, avg=145.19, stdev=70.57 00:25:26.384 clat percentiles (msec): 00:25:26.384 | 1.00th=[ 10], 5.00th=[ 32], 10.00th=[ 53], 20.00th=[ 70], 00:25:26.384 | 30.00th=[ 97], 40.00th=[ 132], 50.00th=[ 157], 60.00th=[ 169], 00:25:26.384 | 70.00th=[ 178], 80.00th=[ 197], 90.00th=[ 228], 95.00th=[ 271], 00:25:26.384 | 99.00th=[ 305], 99.50th=[ 309], 99.90th=[ 330], 99.95th=[ 330], 00:25:26.384 | 99.99th=[ 330] 00:25:26.384 bw ( KiB/s): min=53248, max=189440, per=7.60%, avg=111724.50, stdev=36018.37, samples=20 00:25:26.384 iops : min= 208, max= 740, avg=436.35, stdev=140.70, samples=20 00:25:26.384 lat (msec) : 4=0.05%, 10=0.99%, 20=1.58%, 50=7.00%, 100=21.09% 00:25:26.384 lat (msec) : 250=62.40%, 500=6.89% 00:25:26.384 cpu : usr=1.30%, sys=1.42%, ctx=2251, majf=0, minf=1 00:25:26.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:26.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:26.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:26.384 issued rwts: total=0,4428,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:26.384 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:26.384 job6: (groupid=0, jobs=1): err= 0: pid=1887881: Fri Jul 26 01:07:55 2024 00:25:26.384 write: IOPS=523, BW=131MiB/s (137MB/s)(1335MiB/10194msec); 0 zone resets 00:25:26.384 slat (usec): min=15, max=68931, avg=1441.18, stdev=3928.08 00:25:26.384 clat (msec): min=2, max=443, avg=120.64, stdev=68.65 00:25:26.384 lat (msec): min=2, max=443, avg=122.08, stdev=69.43 00:25:26.384 clat percentiles (msec): 00:25:26.384 | 1.00th=[ 13], 
5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 71], 00:25:26.384 | 30.00th=[ 78], 40.00th=[ 85], 50.00th=[ 93], 60.00th=[ 127], 00:25:26.384 | 70.00th=[ 153], 80.00th=[ 184], 90.00th=[ 215], 95.00th=[ 249], 00:25:26.384 | 99.00th=[ 305], 99.50th=[ 351], 99.90th=[ 430], 99.95th=[ 430], 00:25:26.384 | 99.99th=[ 443] 00:25:26.384 bw ( KiB/s): min=57344, max=235008, per=9.18%, avg=135084.05, stdev=55834.76, samples=20 00:25:26.384 iops : min= 224, max= 918, avg=527.65, stdev=218.08, samples=20 00:25:26.384 lat (msec) : 4=0.11%, 10=0.51%, 20=1.57%, 50=9.42%, 100=41.27% 00:25:26.384 lat (msec) : 250=42.18%, 500=4.94% 00:25:26.384 cpu : usr=1.68%, sys=1.72%, ctx=2510, majf=0, minf=1 00:25:26.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:26.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:26.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:26.384 issued rwts: total=0,5341,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:26.384 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:26.384 job7: (groupid=0, jobs=1): err= 0: pid=1887882: Fri Jul 26 01:07:55 2024 00:25:26.384 write: IOPS=426, BW=107MiB/s (112MB/s)(1080MiB/10129msec); 0 zone resets 00:25:26.384 slat (usec): min=26, max=189183, avg=2154.84, stdev=5257.00 00:25:26.384 clat (msec): min=14, max=320, avg=147.78, stdev=60.31 00:25:26.384 lat (msec): min=14, max=320, avg=149.94, stdev=61.04 00:25:26.384 clat percentiles (msec): 00:25:26.384 | 1.00th=[ 39], 5.00th=[ 56], 10.00th=[ 70], 20.00th=[ 89], 00:25:26.384 | 30.00th=[ 115], 40.00th=[ 134], 50.00th=[ 148], 60.00th=[ 161], 00:25:26.384 | 70.00th=[ 174], 80.00th=[ 192], 90.00th=[ 230], 95.00th=[ 264], 00:25:26.384 | 99.00th=[ 305], 99.50th=[ 317], 99.90th=[ 321], 99.95th=[ 321], 00:25:26.384 | 99.99th=[ 321] 00:25:26.384 bw ( KiB/s): min=63361, max=235520, per=7.41%, avg=108963.40, stdev=39518.57, samples=20 00:25:26.384 iops : min= 247, max= 920, avg=425.60, 
stdev=154.41, samples=20 00:25:26.384 lat (msec) : 20=0.16%, 50=3.15%, 100=21.64%, 250=68.10%, 500=6.94% 00:25:26.384 cpu : usr=1.31%, sys=1.25%, ctx=1402, majf=0, minf=1 00:25:26.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:25:26.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:26.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:26.384 issued rwts: total=0,4320,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:26.384 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:26.384 job8: (groupid=0, jobs=1): err= 0: pid=1887904: Fri Jul 26 01:07:55 2024 00:25:26.384 write: IOPS=625, BW=156MiB/s (164MB/s)(1584MiB/10129msec); 0 zone resets 00:25:26.384 slat (usec): min=17, max=74097, avg=1190.06, stdev=3389.05 00:25:26.384 clat (usec): min=1119, max=272304, avg=101015.62, stdev=62539.17 00:25:26.384 lat (usec): min=1162, max=272342, avg=102205.67, stdev=63321.87 00:25:26.384 clat percentiles (msec): 00:25:26.384 | 1.00th=[ 6], 5.00th=[ 15], 10.00th=[ 32], 20.00th=[ 43], 00:25:26.384 | 30.00th=[ 52], 40.00th=[ 74], 50.00th=[ 89], 60.00th=[ 109], 00:25:26.384 | 70.00th=[ 134], 80.00th=[ 157], 90.00th=[ 197], 95.00th=[ 222], 00:25:26.384 | 99.00th=[ 243], 99.50th=[ 249], 99.90th=[ 259], 99.95th=[ 271], 00:25:26.384 | 99.99th=[ 271] 00:25:26.384 bw ( KiB/s): min=71680, max=315392, per=10.92%, avg=160583.00, stdev=63625.87, samples=20 00:25:26.384 iops : min= 280, max= 1232, avg=627.25, stdev=248.53, samples=20 00:25:26.384 lat (msec) : 2=0.17%, 4=0.55%, 10=2.41%, 20=3.66%, 50=21.79% 00:25:26.384 lat (msec) : 100=28.91%, 250=42.02%, 500=0.47% 00:25:26.384 cpu : usr=1.96%, sys=1.98%, ctx=3173, majf=0, minf=1 00:25:26.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:26.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:26.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:26.384 
issued rwts: total=0,6337,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:26.384 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:26.384 job9: (groupid=0, jobs=1): err= 0: pid=1887918: Fri Jul 26 01:07:55 2024 00:25:26.384 write: IOPS=545, BW=136MiB/s (143MB/s)(1391MiB/10194msec); 0 zone resets 00:25:26.384 slat (usec): min=19, max=67990, avg=1433.44, stdev=3812.09 00:25:26.384 clat (msec): min=3, max=459, avg=115.80, stdev=78.27 00:25:26.384 lat (msec): min=3, max=459, avg=117.24, stdev=79.23 00:25:26.384 clat percentiles (msec): 00:25:26.384 | 1.00th=[ 16], 5.00th=[ 39], 10.00th=[ 41], 20.00th=[ 41], 00:25:26.384 | 30.00th=[ 42], 40.00th=[ 58], 50.00th=[ 107], 60.00th=[ 140], 00:25:26.384 | 70.00th=[ 163], 80.00th=[ 184], 90.00th=[ 224], 95.00th=[ 259], 00:25:26.384 | 99.00th=[ 305], 99.50th=[ 368], 99.90th=[ 447], 99.95th=[ 447], 00:25:26.384 | 99.99th=[ 460] 00:25:26.384 bw ( KiB/s): min=53760, max=396800, per=9.57%, avg=140757.55, stdev=98127.89, samples=20 00:25:26.384 iops : min= 210, max= 1550, avg=549.80, stdev=383.34, samples=20 00:25:26.384 lat (msec) : 4=0.05%, 10=0.45%, 20=0.95%, 50=36.39%, 100=11.04% 00:25:26.384 lat (msec) : 250=44.91%, 500=6.20% 00:25:26.384 cpu : usr=1.69%, sys=1.76%, ctx=2291, majf=0, minf=1 00:25:26.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:26.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:26.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:26.384 issued rwts: total=0,5562,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:26.384 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:26.384 job10: (groupid=0, jobs=1): err= 0: pid=1887933: Fri Jul 26 01:07:55 2024 00:25:26.384 write: IOPS=361, BW=90.5MiB/s (94.9MB/s)(923MiB/10198msec); 0 zone resets 00:25:26.384 slat (usec): min=26, max=89976, avg=2527.90, stdev=5482.69 00:25:26.384 clat (msec): min=3, max=411, avg=174.18, stdev=64.98 00:25:26.384 lat (msec): 
min=3, max=411, avg=176.71, stdev=65.80 00:25:26.384 clat percentiles (msec): 00:25:26.384 | 1.00th=[ 16], 5.00th=[ 50], 10.00th=[ 94], 20.00th=[ 122], 00:25:26.384 | 30.00th=[ 138], 40.00th=[ 167], 50.00th=[ 182], 60.00th=[ 197], 00:25:26.384 | 70.00th=[ 209], 80.00th=[ 230], 90.00th=[ 253], 95.00th=[ 268], 00:25:26.384 | 99.00th=[ 305], 99.50th=[ 334], 99.90th=[ 401], 99.95th=[ 414], 00:25:26.384 | 99.99th=[ 414] 00:25:26.384 bw ( KiB/s): min=59273, max=136192, per=6.31%, avg=92835.80, stdev=25674.04, samples=20 00:25:26.384 iops : min= 231, max= 532, avg=362.60, stdev=100.33, samples=20 00:25:26.384 lat (msec) : 4=0.03%, 10=0.24%, 20=1.25%, 50=3.60%, 100=9.05% 00:25:26.384 lat (msec) : 250=74.47%, 500=11.36% 00:25:26.384 cpu : usr=1.04%, sys=1.14%, ctx=1272, majf=0, minf=1 00:25:26.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:25:26.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:26.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:26.384 issued rwts: total=0,3690,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:26.384 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:26.384 00:25:26.384 Run status group 0 (all jobs): 00:25:26.384 WRITE: bw=1436MiB/s (1506MB/s), 90.5MiB/s-162MiB/s (94.9MB/s-170MB/s), io=14.3GiB (15.4GB), run=10062-10198msec 00:25:26.384 00:25:26.384 Disk stats (read/write): 00:25:26.384 nvme0n1: ios=49/11343, merge=0/0, ticks=39/1241543, in_queue=1241582, util=97.20% 00:25:26.384 nvme10n1: ios=36/10633, merge=0/0, ticks=566/1212307, in_queue=1212873, util=100.00% 00:25:26.384 nvme1n1: ios=48/12678, merge=0/0, ticks=162/1222189, in_queue=1222351, util=98.54% 00:25:26.384 nvme2n1: ios=46/9619, merge=0/0, ticks=239/1249095, in_queue=1249334, util=99.73% 00:25:26.384 nvme3n1: ios=0/12677, merge=0/0, ticks=0/1216914, in_queue=1216914, util=97.66% 00:25:26.384 nvme4n1: ios=46/8578, merge=0/0, ticks=1558/1217142, in_queue=1218700, util=99.98% 
00:25:26.384 nvme5n1: ios=0/10657, merge=0/0, ticks=0/1241251, in_queue=1241251, util=98.21% 00:25:26.384 nvme6n1: ios=48/8449, merge=0/0, ticks=3202/1190244, in_queue=1193446, util=99.98% 00:25:26.384 nvme7n1: ios=39/12484, merge=0/0, ticks=1658/1214963, in_queue=1216621, util=99.98% 00:25:26.385 nvme8n1: ios=0/11098, merge=0/0, ticks=0/1240473, in_queue=1240473, util=98.97% 00:25:26.385 nvme9n1: ios=42/7347, merge=0/0, ticks=1385/1231414, in_queue=1232799, util=99.98% 00:25:26.385 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:26.385 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:26.385 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:26.385 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:26.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:26.385 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:26.385 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:26.385 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:26.385 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:25:26.385 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:26.385 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:25:26.385 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:26.385 01:07:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:26.385 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.385 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:26.385 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.385 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:26.385 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:26.385 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:26.385 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:26.385 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:26.385 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:26.385 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:25:26.385 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:26.385 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:25:26.385 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:26.385 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:26.385 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.385 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:26.385 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.385 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:26.385 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:26.643 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:26.643 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:26.643 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:26.643 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:26.643 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:25:26.643 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:26.643 01:07:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:25:26.643 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:26.643 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:26.643 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.643 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:26.643 01:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.643 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:26.643 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:26.901 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:26.901 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:26.901 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:26.901 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:26.901 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:25:26.901 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:26.901 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:25:26.901 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:26.901 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:26.901 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.901 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:26.901 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.901 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:25:26.901 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:26.901 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:26.901 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:26.901 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:26.901 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:26.901 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:25:26.901 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:26.901 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:25:26.901 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:26.901 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:26.901 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.901 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:27.160 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.160 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:27.160 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:27.160 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 
1 controller(s) 00:25:27.160 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:27.160 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:27.160 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:27.160 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:25:27.160 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:27.160 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:25:27.160 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:27.160 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:27.160 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.160 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:27.160 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.160 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:27.160 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:27.420 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:27.420 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:27.420 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@1219 -- # local i=0 00:25:27.420 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:27.420 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:25:27.420 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:27.420 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:25:27.420 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:27.420 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:27.420 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.420 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:27.420 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.420 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:27.420 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:27.680 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:27.680 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:27.680 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:27.680 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:27.680 01:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:25:27.680 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:27.680 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:25:27.680 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:27.680 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:27.680 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.680 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:27.680 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.680 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:27.680 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:27.680 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:27.680 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:27.680 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:27.680 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:27.680 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:25:27.680 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l 
-o NAME,SERIAL 00:25:27.680 01:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:25:27.680 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:27.680 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:27.680 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.680 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:27.680 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.680 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:27.680 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:27.941 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1231 -- # return 0 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:27.941 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 
00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:27.941 rmmod nvme_tcp 00:25:27.941 rmmod nvme_fabrics 00:25:27.941 rmmod nvme_keyring 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1882547 ']' 00:25:27.941 01:07:58 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1882547 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 1882547 ']' 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 1882547 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1882547 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1882547' 00:25:27.941 killing process with pid 1882547 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 1882547 00:25:27.941 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 1882547 00:25:28.506 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:28.506 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:28.506 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:28.506 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:28.506 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:25:28.506 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.506 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:28.506 01:07:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.445 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:30.704 00:25:30.704 real 1m0.573s 00:25:30.704 user 3m21.940s 00:25:30.704 sys 0m25.469s 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.704 ************************************ 00:25:30.704 END TEST nvmf_multiconnection 00:25:30.704 ************************************ 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:30.704 ************************************ 00:25:30.704 START TEST nvmf_initiator_timeout 00:25:30.704 ************************************ 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:30.704 * Looking for test storage... 
00:25:30.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:30.704 01:08:00 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:30.704 01:08:00 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:30.704 01:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:32.608 01:08:02 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@308 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:32.608 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:32.608 01:08:02 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:32.608 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:32.609 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:32.609 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:32.609 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:32.609 01:08:02 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:32.609 01:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:32.609 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:32.609 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:32.609 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:32.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:32.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:25:32.609 00:25:32.609 --- 10.0.0.2 ping statistics --- 00:25:32.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.609 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:25:32.609 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:32.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:32.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:25:32.609 00:25:32.609 --- 10.0.0.1 ping statistics --- 00:25:32.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.609 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:25:32.609 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:32.609 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:32.609 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:32.609 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:32.609 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:32.609 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:32.609 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:32.609 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:32.609 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:32.868 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:32.868 
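The `nvmf_tcp_init` plumbing traced above can be condensed into a short sketch. The interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the 10.0.0.0/24 addressing are copied from this log; everything else is an assumption. `RUN` defaults to `echo` so the sketch is a dry run — the real commands require root and the actual NICs.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace topology built by nvmf/common.sh above.
# Set RUN= (empty) to execute for real; requires root and the cvl_* NICs.
RUN="${RUN:-echo}"
NS=cvl_0_0_ns_spdk                      # namespace name taken from the log

$RUN ip netns add "$NS"                 # target side lives in its own netns
$RUN ip link set cvl_0_0 netns "$NS"
$RUN ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator IP
$RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
$RUN ip link set cvl_0_1 up
$RUN ip netns exec "$NS" ip link set cvl_0_0 up
$RUN ip netns exec "$NS" ip link set lo up
$RUN iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
$RUN ping -c 1 10.0.0.2                 # sanity check, as in the log
```

Moving the target NIC into a dedicated namespace is what lets a single host act as both NVMe/TCP initiator and target over real hardware.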
01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:32.868 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:32.868 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:32.868 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1891221 00:25:32.868 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:32.868 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1891221 00:25:32.868 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 1891221 ']' 00:25:32.868 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.868 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:32.868 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:32.868 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:32.868 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:32.868 [2024-07-26 01:08:03.097798] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:25:32.868 [2024-07-26 01:08:03.097884] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:32.868 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.868 [2024-07-26 01:08:03.163542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:32.868 [2024-07-26 01:08:03.252096] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:32.868 [2024-07-26 01:08:03.252163] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:32.868 [2024-07-26 01:08:03.252177] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:32.868 [2024-07-26 01:08:03.252189] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:32.868 [2024-07-26 01:08:03.252199] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:32.868 [2024-07-26 01:08:03.252282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:32.868 [2024-07-26 01:08:03.252355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:32.868 [2024-07-26 01:08:03.252400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:32.868 [2024-07-26 01:08:03.252402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.127 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:33.127 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:25:33.127 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:33.127 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:33.127 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:33.127 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:33.127 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:33.127 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:33.127 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.127 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:33.127 Malloc0 00:25:33.127 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.127 01:08:03 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:33.127 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.127 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:33.127 Delay0 00:25:33.127 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.127 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:33.127 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.127 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:33.127 [2024-07-26 01:08:03.437212] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.127 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.127 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:33.127 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.127 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:33.127 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.127 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:33.127 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.127 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:33.127 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.127 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:33.127 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.127 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:33.127 [2024-07-26 01:08:03.465522] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:33.127 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.128 01:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:34.064 01:08:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:34.064 01:08:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:25:34.064 01:08:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:34.064 01:08:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:34.064 01:08:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:25:35.968 01:08:06 
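The RPC sequence `initiator_timeout.sh` drives above (malloc bdev → delay bdev → transport → subsystem → namespace → listener → connect) roughly amounts to the following. This is a dry-run sketch: the `rpc.py` path is an assumption, while the NQN, serial, delay parameters, and address are copied from the log.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target configuration traced above.
RUN="${RUN:-echo}"
RPC=scripts/rpc.py                       # path assumed, not from the log
NQN=nqn.2016-06.io.spdk:cnode1

$RUN "$RPC" bdev_malloc_create 64 512 -b Malloc0
$RUN "$RPC" bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
$RUN "$RPC" nvmf_create_transport -t tcp -o -u 8192
$RUN "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
$RUN "$RPC" nvmf_subsystem_add_ns "$NQN" Delay0
$RUN "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# Initiator side (outside the netns), as in the log:
$RUN nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
```

Layering `Delay0` over `Malloc0` is the point of the test: the delay bdev's latencies can later be raised past the initiator's I/O timeout on demand.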
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:35.968 01:08:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:35.968 01:08:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:25:35.968 01:08:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:35.968 01:08:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:35.968 01:08:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:25:35.968 01:08:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1891648 00:25:35.968 01:08:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:35.968 01:08:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:35.968 [global] 00:25:35.968 thread=1 00:25:35.968 invalidate=1 00:25:35.968 rw=write 00:25:35.968 time_based=1 00:25:35.968 runtime=60 00:25:35.968 ioengine=libaio 00:25:35.968 direct=1 00:25:35.968 bs=4096 00:25:35.968 iodepth=1 00:25:35.968 norandommap=0 00:25:35.968 numjobs=1 00:25:35.968 00:25:35.968 verify_dump=1 00:25:35.968 verify_backlog=512 00:25:35.968 verify_state_save=0 00:25:35.968 do_verify=1 00:25:35.968 verify=crc32c-intel 00:25:35.968 [job0] 00:25:35.968 filename=/dev/nvme0n1 00:25:35.968 Could not set queue depth (nvme0n1) 00:25:35.968 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:35.968 fio-3.35 00:25:35.968 Starting 1 thread 00:25:39.253 01:08:09 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:39.253 01:08:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.253 01:08:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:39.253 true 00:25:39.253 01:08:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.253 01:08:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:39.253 01:08:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.253 01:08:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:39.253 true 00:25:39.253 01:08:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.253 01:08:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:39.253 01:08:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.253 01:08:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:39.253 true 00:25:39.253 01:08:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.253 01:08:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:39.253 01:08:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.253 01:08:09 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:39.253 true 00:25:39.253 01:08:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.253 01:08:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:41.786 01:08:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:41.786 01:08:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.786 01:08:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:42.045 true 00:25:42.045 01:08:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.045 01:08:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:42.045 01:08:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.045 01:08:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:42.045 true 00:25:42.045 01:08:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.045 01:08:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:42.045 01:08:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.045 01:08:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:42.045 true 00:25:42.045 01:08:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
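The two `bdev_delay_update_latency` passes above first push the delay bdev's latency classes far above the initiator timeout (forcing in-flight fio I/O to stall and time out), then drop them back to 30 µs so the queued I/O drains. A dry-run sketch; the `rpc.py` path is assumed, and the raised value of 31000000 is taken from the log (the trace shows 310000000 for `p99_write`, which may be intentional or a stray digit in the test script).

```shell
#!/usr/bin/env bash
# Dry-run sketch of the latency toggle traced above.
RUN="${RUN:-echo}"
RPC=scripts/rpc.py                       # path assumed
HIGH_US=31000000                         # ~31 s: well past the I/O timeout
LOW_US=30                                # back to the bdev's original 30 us

for metric in avg_read avg_write p99_read p99_write; do
  $RUN "$RPC" bdev_delay_update_latency Delay0 "$metric" "$HIGH_US"
done
sleep 3                                  # let I/O pile up, as the test does
for metric in avg_read avg_write p99_read p99_write; do
  $RUN "$RPC" bdev_delay_update_latency Delay0 "$metric" "$LOW_US"
done
```

The fio summary later in the log (clat percentiles clustered around 41 s, yet `err= 0`) is the expected signature: I/O stalled through the high-latency window but completed successfully once latency was restored.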
00:25:42.045 01:08:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:42.045 01:08:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.045 01:08:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:42.045 true 00:25:42.045 01:08:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.045 01:08:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:42.045 01:08:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1891648 00:26:38.269 00:26:38.269 job0: (groupid=0, jobs=1): err= 0: pid=1891725: Fri Jul 26 01:09:06 2024 00:26:38.269 read: IOPS=18, BW=72.7KiB/s (74.4kB/s)(4360KiB/60001msec) 00:26:38.269 slat (usec): min=6, max=10868, avg=29.66, stdev=328.75 00:26:38.269 clat (usec): min=292, max=41245k, avg=54683.31, stdev=1248935.51 00:26:38.269 lat (usec): min=299, max=41245k, avg=54712.97, stdev=1248935.41 00:26:38.269 clat percentiles (usec): 00:26:38.269 | 1.00th=[ 302], 5.00th=[ 314], 10.00th=[ 326], 00:26:38.269 | 20.00th=[ 351], 30.00th=[ 457], 40.00th=[ 486], 00:26:38.269 | 50.00th=[ 510], 60.00th=[ 40633], 70.00th=[ 41157], 00:26:38.269 | 80.00th=[ 41157], 90.00th=[ 41157], 95.00th=[ 41157], 00:26:38.269 | 99.00th=[ 41157], 99.50th=[ 42206], 99.90th=[ 44827], 00:26:38.269 | 99.95th=[17112761], 99.99th=[17112761] 00:26:38.269 write: IOPS=25, BW=102KiB/s (105kB/s)(6144KiB/60001msec); 0 zone resets 00:26:38.269 slat (nsec): min=6318, max=36213, avg=10780.80, stdev=3438.45 00:26:38.269 clat (usec): min=182, max=688, avg=218.34, stdev=24.14 00:26:38.269 lat (usec): min=190, max=705, avg=229.12, stdev=25.63 00:26:38.269 clat percentiles (usec): 00:26:38.269 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 
200], 20.00th=[ 204], 00:26:38.269 | 30.00th=[ 208], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 219], 00:26:38.269 | 70.00th=[ 223], 80.00th=[ 229], 90.00th=[ 241], 95.00th=[ 255], 00:26:38.269 | 99.00th=[ 302], 99.50th=[ 322], 99.90th=[ 412], 99.95th=[ 693], 00:26:38.269 | 99.99th=[ 693] 00:26:38.269 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=2 00:26:38.269 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:26:38.269 lat (usec) : 250=54.87%, 500=23.19%, 750=5.10% 00:26:38.269 lat (msec) : 50=16.79%, >=2000=0.04% 00:26:38.269 cpu : usr=0.04%, sys=0.08%, ctx=2627, majf=0, minf=2 00:26:38.269 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:38.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.269 issued rwts: total=1090,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.269 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:38.269 00:26:38.269 Run status group 0 (all jobs): 00:26:38.269 READ: bw=72.7KiB/s (74.4kB/s), 72.7KiB/s-72.7KiB/s (74.4kB/s-74.4kB/s), io=4360KiB (4465kB), run=60001-60001msec 00:26:38.269 WRITE: bw=102KiB/s (105kB/s), 102KiB/s-102KiB/s (105kB/s-105kB/s), io=6144KiB (6291kB), run=60001-60001msec 00:26:38.269 00:26:38.269 Disk stats (read/write): 00:26:38.269 nvme0n1: ios=1123/1192, merge=0/0, ticks=19582/252, in_queue=19834, util=99.78% 00:26:38.269 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:38.269 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:38.269 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:38.269 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local 
i=0 00:26:38.269 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:38.269 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:38.269 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:38.269 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:38.269 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:26:38.269 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:38.269 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:38.269 nvmf hotplug test: fio successful as expected 00:26:38.269 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:38.269 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.269 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:38.269 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.269 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:38.269 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:38.269 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:38.269 01:09:06 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:38.269 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:26:38.269 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:38.269 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:26:38.269 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:38.269 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:38.269 rmmod nvme_tcp 00:26:38.269 rmmod nvme_fabrics 00:26:38.269 rmmod nvme_keyring 00:26:38.269 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:38.270 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:26:38.270 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:26:38.270 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1891221 ']' 00:26:38.270 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1891221 00:26:38.270 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 1891221 ']' 00:26:38.270 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 1891221 00:26:38.270 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:26:38.270 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:38.270 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1891221 00:26:38.270 01:09:06 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:38.270 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:38.270 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1891221' 00:26:38.270 killing process with pid 1891221 00:26:38.270 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 1891221 00:26:38.270 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 1891221 00:26:38.270 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:38.270 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:38.270 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:38.270 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:38.270 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:38.270 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.270 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:38.270 01:09:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.837 01:09:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:38.837 00:26:38.837 real 1m8.057s 00:26:38.837 user 4m10.869s 00:26:38.837 sys 0m6.287s 00:26:38.837 01:09:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:26:38.837 01:09:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:38.837 ************************************ 00:26:38.837 END TEST nvmf_initiator_timeout 00:26:38.837 ************************************ 00:26:38.837 01:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:26:38.837 01:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:26:38.837 01:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:26:38.837 01:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:26:38.837 01:09:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:40.763 01:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:40.764 01:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:26:40.764 01:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:40.764 01:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:40.764 01:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:40.764 01:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:40.764 01:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:40.764 01:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:26:40.764 01:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:40.764 01:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:26:40.764 01:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:26:40.764 01:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:26:40.764 01:09:10 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@297 -- # local -ga x722 00:26:40.764 01:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:26:40.764 01:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:26:40.764 01:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:40.764 01:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:40.764 01:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:40.764 01:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:40.764 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:40.764 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:40.764 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:40.764 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:40.764 ************************************ 00:26:40.764 START TEST nvmf_perf_adq 00:26:40.764 ************************************ 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:40.764 * Looking for test storage... 
00:26:40.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:40.764 01:09:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.764 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.765 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.765 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:40.765 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.765 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:26:40.765 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:40.765 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:40.765 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:40.765 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:40.765 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:40.765 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:40.765 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:40.765 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:40.765 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:40.765 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:40.765 01:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:42.673 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:42.673 01:09:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:42.673 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:42.673 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:42.673 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:26:42.673 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:43.610 01:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:45.513 01:09:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:50.782 
01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 
00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:50.782 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:50.782 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:50.782 01:09:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:50.782 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.782 01:09:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:50.782 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:50.782 
01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:50.782 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:50.783 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:50.783 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:50.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:50.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:26:50.783 00:26:50.783 --- 10.0.0.2 ping statistics --- 00:26:50.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.783 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:26:50.783 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:50.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:50.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:26:50.783 00:26:50.783 --- 10.0.0.1 ping statistics --- 00:26:50.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.783 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:26:50.783 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:50.783 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:50.783 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:50.783 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:50.783 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:50.783 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:50.783 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:50.783 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:50.783 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:50.783 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:50.783 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter 
start_nvmf_tgt 00:26:50.783 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:50.783 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:50.783 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1903846 00:26:50.783 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:50.783 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1903846 00:26:50.783 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1903846 ']' 00:26:50.783 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.783 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:50.783 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.783 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:50.783 01:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:50.783 [2024-07-26 01:09:20.961767] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:26:50.783 [2024-07-26 01:09:20.961841] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.783 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.783 [2024-07-26 01:09:21.025069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:50.783 [2024-07-26 01:09:21.110366] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:50.783 [2024-07-26 01:09:21.110420] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:50.783 [2024-07-26 01:09:21.110443] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:50.783 [2024-07-26 01:09:21.110454] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:50.783 [2024-07-26 01:09:21.110464] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:50.783 [2024-07-26 01:09:21.110520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.783 [2024-07-26 01:09:21.110578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:50.783 [2024-07-26 01:09:21.110645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:50.783 [2024-07-26 01:09:21.110647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.783 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:50.783 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:26:50.783 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:50.783 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:50.783 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:50.783 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:50.783 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:26:50.783 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:50.783 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:50.783 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.783 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:51.042 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.042 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:51.042 01:09:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:51.042 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.042 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:51.042 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.042 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:51.042 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.042 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:51.042 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.042 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:51.042 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.042 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:51.042 [2024-07-26 01:09:21.359686] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:51.042 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.042 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:51.042 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.042 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:51.042 Malloc1 00:26:51.042 01:09:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.042 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:51.042 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.042 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:51.042 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.042 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:51.042 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.042 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:51.042 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.042 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:51.042 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.042 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:51.042 [2024-07-26 01:09:21.412823] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:51.042 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.042 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1903995 00:26:51.042 01:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:26:51.042 01:09:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:51.042 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.573 01:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:53.573 01:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.573 01:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:53.573 01:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.573 01:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:26:53.573 "tick_rate": 2700000000, 00:26:53.573 "poll_groups": [ 00:26:53.573 { 00:26:53.573 "name": "nvmf_tgt_poll_group_000", 00:26:53.573 "admin_qpairs": 1, 00:26:53.573 "io_qpairs": 1, 00:26:53.573 "current_admin_qpairs": 1, 00:26:53.573 "current_io_qpairs": 1, 00:26:53.573 "pending_bdev_io": 0, 00:26:53.573 "completed_nvme_io": 20404, 00:26:53.573 "transports": [ 00:26:53.573 { 00:26:53.573 "trtype": "TCP" 00:26:53.573 } 00:26:53.573 ] 00:26:53.573 }, 00:26:53.573 { 00:26:53.573 "name": "nvmf_tgt_poll_group_001", 00:26:53.573 "admin_qpairs": 0, 00:26:53.573 "io_qpairs": 1, 00:26:53.573 "current_admin_qpairs": 0, 00:26:53.573 "current_io_qpairs": 1, 00:26:53.573 "pending_bdev_io": 0, 00:26:53.573 "completed_nvme_io": 20190, 00:26:53.573 "transports": [ 00:26:53.573 { 00:26:53.573 "trtype": "TCP" 00:26:53.573 } 00:26:53.573 ] 00:26:53.573 }, 00:26:53.573 { 00:26:53.573 "name": "nvmf_tgt_poll_group_002", 00:26:53.573 "admin_qpairs": 0, 00:26:53.573 "io_qpairs": 1, 00:26:53.573 "current_admin_qpairs": 0, 00:26:53.573 "current_io_qpairs": 1, 00:26:53.573 "pending_bdev_io": 0, 
00:26:53.573 "completed_nvme_io": 20425, 00:26:53.573 "transports": [ 00:26:53.573 { 00:26:53.573 "trtype": "TCP" 00:26:53.573 } 00:26:53.573 ] 00:26:53.573 }, 00:26:53.573 { 00:26:53.573 "name": "nvmf_tgt_poll_group_003", 00:26:53.573 "admin_qpairs": 0, 00:26:53.573 "io_qpairs": 1, 00:26:53.573 "current_admin_qpairs": 0, 00:26:53.573 "current_io_qpairs": 1, 00:26:53.573 "pending_bdev_io": 0, 00:26:53.573 "completed_nvme_io": 19893, 00:26:53.573 "transports": [ 00:26:53.573 { 00:26:53.573 "trtype": "TCP" 00:26:53.573 } 00:26:53.573 ] 00:26:53.573 } 00:26:53.573 ] 00:26:53.573 }' 00:26:53.573 01:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:53.573 01:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:26:53.573 01:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:26:53.573 01:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:26:53.573 01:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1903995 00:27:01.682 Initializing NVMe Controllers 00:27:01.682 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:01.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:01.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:01.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:01.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:01.682 Initialization complete. Launching workers. 
00:27:01.682 ======================================================== 00:27:01.682 Latency(us) 00:27:01.682 Device Information : IOPS MiB/s Average min max 00:27:01.682 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10327.70 40.34 6197.06 2615.72 10472.54 00:27:01.682 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10556.00 41.23 6062.94 2504.00 9375.28 00:27:01.682 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10711.70 41.84 5974.35 1721.54 10184.80 00:27:01.682 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10734.60 41.93 5963.08 2325.55 10879.21 00:27:01.682 ======================================================== 00:27:01.682 Total : 42330.00 165.35 6047.92 1721.54 10879.21 00:27:01.682 00:27:01.682 01:09:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:27:01.682 01:09:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:01.682 01:09:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:01.682 01:09:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:01.682 01:09:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:01.682 01:09:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:01.682 01:09:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:01.682 rmmod nvme_tcp 00:27:01.682 rmmod nvme_fabrics 00:27:01.682 rmmod nvme_keyring 00:27:01.682 01:09:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:01.682 01:09:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:01.682 01:09:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:01.682 01:09:31 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1903846 ']' 00:27:01.682 01:09:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1903846 00:27:01.682 01:09:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1903846 ']' 00:27:01.682 01:09:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1903846 00:27:01.682 01:09:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:27:01.682 01:09:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:01.682 01:09:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1903846 00:27:01.682 01:09:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:01.682 01:09:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:01.682 01:09:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1903846' 00:27:01.682 killing process with pid 1903846 00:27:01.682 01:09:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1903846 00:27:01.682 01:09:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1903846 00:27:01.682 01:09:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:01.683 01:09:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:01.683 01:09:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:01.683 01:09:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:01.683 01:09:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:27:01.683 01:09:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.683 01:09:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:01.683 01:09:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.595 01:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:03.595 01:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:03.595 01:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:04.529 01:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:06.430 01:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@298 -- # local -ga mlx 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:11.724 01:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:11.724 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:11.724 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:11.724 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:11.724 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:11.725 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:11.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:11.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:27:11.725 00:27:11.725 --- 10.0.0.2 ping statistics --- 00:27:11.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.725 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:11.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:11.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:27:11.725 00:27:11.725 --- 10.0.0.1 ping statistics --- 00:27:11.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.725 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk 
ethtool --offload cvl_0_0 hw-tc-offload on 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:11.725 net.core.busy_poll = 1 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:11.725 net.core.busy_read = 1 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 
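The ADQ driver setup traced above (perf_adq.sh@22–38) follows a fixed sequence: enable hardware tc offload, disable packet-inspect optimization, turn on busy polling, then split the NIC queues into two traffic classes and steer the NVMe/TCP flow into the ADQ class. A condensed, annotated sketch of those same commands, using the interface name, address, and port from this run (requires root and an ADQ-capable E810 NIC, so it is a config sketch rather than something runnable anywhere):

```shell
#!/usr/bin/env bash
# ADQ driver configuration as traced in this run; values are run-specific.
IFACE=cvl_0_0
NS="ip netns exec cvl_0_0_ns_spdk"

$NS ethtool --offload "$IFACE" hw-tc-offload on    # enable tc offload in hardware
$NS ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1                     # busy-poll sockets instead of sleeping
sysctl -w net.core.busy_read=1
# Two traffic classes: queues 0-1 -> TC0 (default), queues 2-3 -> TC1 (ADQ)
$NS tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$NS tc qdisc add dev "$IFACE" ingress
# Steer NVMe/TCP traffic (dst 10.0.0.2:4420) into TC1, offloaded to hardware
$NS tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
```

The `skip_sw hw_tc 1` filter is what confines the target's I/O connections to the dedicated queue set; the later poll-group check in the trace verifies that this steering actually worked.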
00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1906607 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1906607 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1906607 ']' 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:11.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:11.725 01:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.725 [2024-07-26 01:09:41.975681] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:27:11.725 [2024-07-26 01:09:41.975761] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:11.725 EAL: No free 2048 kB hugepages reported on node 1 00:27:11.725 [2024-07-26 01:09:42.040268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:11.725 [2024-07-26 01:09:42.130673] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:11.725 [2024-07-26 01:09:42.130735] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:11.725 [2024-07-26 01:09:42.130748] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:11.725 [2024-07-26 01:09:42.130759] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:11.725 [2024-07-26 01:09:42.130769] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:11.725 [2024-07-26 01:09:42.130861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:11.725 [2024-07-26 01:09:42.130923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:11.725 [2024-07-26 01:09:42.130990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:11.725 [2024-07-26 01:09:42.130992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:11.983 01:09:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.983 [2024-07-26 01:09:42.368463] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.983 Malloc1 00:27:11.983 01:09:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.983 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:12.241 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.241 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:12.241 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.241 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:12.241 [2024-07-26 01:09:42.419908] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:12.241 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.241 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1906645 00:27:12.241 01:09:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:12.241 01:09:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:12.241 EAL: No free 2048 kB hugepages reported on node 1 00:27:14.145 01:09:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:14.145 01:09:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.145 01:09:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:14.145 01:09:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.145 01:09:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:14.145 "tick_rate": 2700000000, 00:27:14.145 "poll_groups": [ 00:27:14.145 { 00:27:14.145 "name": "nvmf_tgt_poll_group_000", 00:27:14.145 "admin_qpairs": 1, 00:27:14.145 "io_qpairs": 1, 00:27:14.145 "current_admin_qpairs": 1, 00:27:14.145 "current_io_qpairs": 1, 00:27:14.145 "pending_bdev_io": 0, 00:27:14.145 "completed_nvme_io": 24399, 00:27:14.145 "transports": [ 00:27:14.145 { 00:27:14.145 "trtype": "TCP" 00:27:14.145 } 00:27:14.145 ] 00:27:14.145 }, 00:27:14.145 { 00:27:14.145 "name": "nvmf_tgt_poll_group_001", 00:27:14.145 "admin_qpairs": 0, 00:27:14.145 "io_qpairs": 3, 00:27:14.145 "current_admin_qpairs": 0, 00:27:14.145 "current_io_qpairs": 3, 00:27:14.145 "pending_bdev_io": 0, 00:27:14.145 "completed_nvme_io": 26091, 00:27:14.145 "transports": [ 00:27:14.145 { 00:27:14.145 "trtype": "TCP" 00:27:14.145 } 00:27:14.145 ] 00:27:14.145 }, 00:27:14.145 { 00:27:14.145 "name": "nvmf_tgt_poll_group_002", 00:27:14.145 "admin_qpairs": 0, 00:27:14.145 "io_qpairs": 0, 00:27:14.145 "current_admin_qpairs": 0, 00:27:14.145 "current_io_qpairs": 0, 00:27:14.145 "pending_bdev_io": 0, 
00:27:14.145 "completed_nvme_io": 0, 00:27:14.145 "transports": [ 00:27:14.145 { 00:27:14.145 "trtype": "TCP" 00:27:14.145 } 00:27:14.145 ] 00:27:14.145 }, 00:27:14.145 { 00:27:14.145 "name": "nvmf_tgt_poll_group_003", 00:27:14.145 "admin_qpairs": 0, 00:27:14.145 "io_qpairs": 0, 00:27:14.145 "current_admin_qpairs": 0, 00:27:14.145 "current_io_qpairs": 0, 00:27:14.145 "pending_bdev_io": 0, 00:27:14.145 "completed_nvme_io": 0, 00:27:14.145 "transports": [ 00:27:14.145 { 00:27:14.145 "trtype": "TCP" 00:27:14.145 } 00:27:14.145 ] 00:27:14.145 } 00:27:14.145 ] 00:27:14.145 }' 00:27:14.145 01:09:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:14.145 01:09:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:27:14.145 01:09:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:27:14.145 01:09:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:27:14.145 01:09:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1906645 00:27:22.253 Initializing NVMe Controllers 00:27:22.253 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:22.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:22.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:22.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:22.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:22.253 Initialization complete. Launching workers. 
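The check at perf_adq.sh@100–101 counts poll groups that received no I/O queue pairs and fails if fewer than 2 are idle, i.e. if traffic leaked outside the ADQ traffic class onto extra cores. Below is the jq pipeline as traced, plus a jq-free equivalent (an illustrative stand-in, not part of the test script) exercised against a stats excerpt matching the `nvmf_get_stats` output above:

```shell
#!/usr/bin/env bash
# Stats excerpt mirroring the nvmf_get_stats output in the trace:
# two busy poll groups (1 and 3 I/O qpairs) and two idle ones.
stats='{"poll_groups":[
  {"name":"nvmf_tgt_poll_group_000","current_io_qpairs":1},
  {"name":"nvmf_tgt_poll_group_001","current_io_qpairs":3},
  {"name":"nvmf_tgt_poll_group_002","current_io_qpairs":0},
  {"name":"nvmf_tgt_poll_group_003","current_io_qpairs":0}]}'

# As traced: jq selects idle groups, wc -l counts them:
#   count=$(jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
#             <<<"$stats" | wc -l)
# jq-free equivalent for the same count (counts lines with a zero qpair field):
count=$(printf '%s\n' "$stats" | grep -c '"current_io_qpairs":0')
echo "$count"   # 2 idle poll groups

# perf_adq.sh@101 inverts this: [[ $count -lt 2 ]] means ADQ steering failed.
if [ "$count" -lt 2 ]; then
    echo "ADQ steering failed: I/O spread over more than 2 cores" >&2
fi
```

In this run `count=2`, so the `[[ 2 -lt 2 ]]` guard in the trace does not fire and the perf workload is left to finish.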
00:27:22.253 ======================================================== 00:27:22.253 Latency(us) 00:27:22.253 Device Information : IOPS MiB/s Average min max 00:27:22.253 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4549.31 17.77 14073.15 1760.74 60189.35 00:27:22.253 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5039.59 19.69 12701.87 1918.03 62122.03 00:27:22.253 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4672.81 18.25 13699.78 2154.03 60673.53 00:27:22.253 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13453.84 52.55 4756.56 1668.74 6667.26 00:27:22.253 ======================================================== 00:27:22.253 Total : 27715.55 108.26 9238.34 1668.74 62122.03 00:27:22.253 00:27:22.253 01:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:22.253 01:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:22.253 01:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:22.253 01:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:22.253 01:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:22.253 01:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:22.253 01:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:22.253 rmmod nvme_tcp 00:27:22.253 rmmod nvme_fabrics 00:27:22.253 rmmod nvme_keyring 00:27:22.512 01:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:22.512 01:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:22.512 01:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:22.512 01:09:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1906607 ']' 00:27:22.512 01:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1906607 00:27:22.512 01:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1906607 ']' 00:27:22.512 01:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1906607 00:27:22.512 01:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:27:22.512 01:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:22.512 01:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1906607 00:27:22.512 01:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:22.512 01:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:22.512 01:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1906607' 00:27:22.512 killing process with pid 1906607 00:27:22.512 01:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1906607 00:27:22.512 01:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1906607 00:27:22.771 01:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:22.771 01:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:22.771 01:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:22.771 01:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:22.771 01:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:27:22.771 01:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:22.771 01:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:22.771 01:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.059 01:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:26.059 01:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:26.059 00:27:26.059 real 0m44.961s 00:27:26.059 user 2m36.251s 00:27:26.059 sys 0m10.908s 00:27:26.059 01:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:26.059 ************************************ 00:27:26.059 END TEST nvmf_perf_adq 00:27:26.059 ************************************ 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:26.059 ************************************ 00:27:26.059 START TEST nvmf_shutdown 00:27:26.059 ************************************ 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:26.059 * Looking for test storage... 
00:27:26.059 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:26.059 01:09:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:26.059 01:09:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:26.059 ************************************ 00:27:26.059 START TEST nvmf_shutdown_tc1 00:27:26.059 ************************************ 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:26.059 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:26.060 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:26.060 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:26.060 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:26.060 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.060 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:26.060 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.060 01:09:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:26.060 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:26.060 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:26.060 01:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 
00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:27.962 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:27.962 01:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:27.962 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 
-- # (( 1 == 0 )) 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:27.962 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:27.962 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:27.962 01:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:27.962 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:27.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:27.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:27:27.963 00:27:27.963 --- 10.0.0.2 ping statistics --- 00:27:27.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.963 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:27.963 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:27.963 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:27:27.963 00:27:27.963 --- 10.0.0.1 ping statistics --- 00:27:27.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.963 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:27.963 
01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1909928 00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1909928 00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1909928 ']' 00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:27.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:27.963 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:27.963 [2024-07-26 01:09:58.384146] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:27:27.963 [2024-07-26 01:09:58.384240] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:28.221 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.221 [2024-07-26 01:09:58.449931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:28.221 [2024-07-26 01:09:58.545556] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:28.221 [2024-07-26 01:09:58.545632] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:28.221 [2024-07-26 01:09:58.545648] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:28.221 [2024-07-26 01:09:58.545662] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:28.221 [2024-07-26 01:09:58.545674] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:28.221 [2024-07-26 01:09:58.545731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:28.221 [2024-07-26 01:09:58.545851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:28.221 [2024-07-26 01:09:58.545920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:28.221 [2024-07-26 01:09:58.545923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:28.479 [2024-07-26 01:09:58.689264] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.479 01:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 
00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.479 01:09:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:28.479 Malloc1 00:27:28.479 [2024-07-26 01:09:58.764237] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:28.479 Malloc2 00:27:28.479 Malloc3 00:27:28.479 Malloc4 00:27:28.737 Malloc5 00:27:28.737 Malloc6 00:27:28.737 Malloc7 00:27:28.737 Malloc8 00:27:28.737 Malloc9 
00:27:28.994 Malloc10 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1910105 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1910105 /var/tmp/bdevperf.sock 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1910105 ']' 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:27:28.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:28.994 { 00:27:28.994 "params": { 00:27:28.994 "name": "Nvme$subsystem", 00:27:28.994 "trtype": "$TEST_TRANSPORT", 00:27:28.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:28.994 "adrfam": "ipv4", 00:27:28.994 "trsvcid": "$NVMF_PORT", 00:27:28.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:28.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:28.994 "hdgst": ${hdgst:-false}, 00:27:28.994 "ddgst": ${ddgst:-false} 00:27:28.994 }, 00:27:28.994 "method": "bdev_nvme_attach_controller" 00:27:28.994 } 00:27:28.994 EOF 00:27:28.994 )") 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:28.994 { 00:27:28.994 "params": { 00:27:28.994 "name": "Nvme$subsystem", 00:27:28.994 "trtype": "$TEST_TRANSPORT", 00:27:28.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:28.994 "adrfam": "ipv4", 00:27:28.994 "trsvcid": "$NVMF_PORT", 00:27:28.994 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:28.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:28.994 "hdgst": ${hdgst:-false}, 00:27:28.994 "ddgst": ${ddgst:-false} 00:27:28.994 }, 00:27:28.994 "method": "bdev_nvme_attach_controller" 00:27:28.994 } 00:27:28.994 EOF 00:27:28.994 )") 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:28.994 { 00:27:28.994 "params": { 00:27:28.994 "name": "Nvme$subsystem", 00:27:28.994 "trtype": "$TEST_TRANSPORT", 00:27:28.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:28.994 "adrfam": "ipv4", 00:27:28.994 "trsvcid": "$NVMF_PORT", 00:27:28.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:28.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:28.994 "hdgst": ${hdgst:-false}, 00:27:28.994 "ddgst": ${ddgst:-false} 00:27:28.994 }, 00:27:28.994 "method": "bdev_nvme_attach_controller" 00:27:28.994 } 00:27:28.994 EOF 00:27:28.994 )") 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:28.994 { 00:27:28.994 "params": { 00:27:28.994 "name": "Nvme$subsystem", 00:27:28.994 "trtype": "$TEST_TRANSPORT", 00:27:28.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:28.994 "adrfam": "ipv4", 00:27:28.994 "trsvcid": "$NVMF_PORT", 00:27:28.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:28.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:28.994 "hdgst": 
${hdgst:-false}, 00:27:28.994 "ddgst": ${ddgst:-false} 00:27:28.994 }, 00:27:28.994 "method": "bdev_nvme_attach_controller" 00:27:28.994 } 00:27:28.994 EOF 00:27:28.994 )") 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:28.994 { 00:27:28.994 "params": { 00:27:28.994 "name": "Nvme$subsystem", 00:27:28.994 "trtype": "$TEST_TRANSPORT", 00:27:28.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:28.994 "adrfam": "ipv4", 00:27:28.994 "trsvcid": "$NVMF_PORT", 00:27:28.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:28.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:28.994 "hdgst": ${hdgst:-false}, 00:27:28.994 "ddgst": ${ddgst:-false} 00:27:28.994 }, 00:27:28.994 "method": "bdev_nvme_attach_controller" 00:27:28.994 } 00:27:28.994 EOF 00:27:28.994 )") 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:28.994 { 00:27:28.994 "params": { 00:27:28.994 "name": "Nvme$subsystem", 00:27:28.994 "trtype": "$TEST_TRANSPORT", 00:27:28.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:28.994 "adrfam": "ipv4", 00:27:28.994 "trsvcid": "$NVMF_PORT", 00:27:28.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:28.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:28.994 "hdgst": ${hdgst:-false}, 00:27:28.994 "ddgst": ${ddgst:-false} 00:27:28.994 }, 00:27:28.994 "method": "bdev_nvme_attach_controller" 
00:27:28.994 } 00:27:28.994 EOF 00:27:28.994 )") 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:28.994 { 00:27:28.994 "params": { 00:27:28.994 "name": "Nvme$subsystem", 00:27:28.994 "trtype": "$TEST_TRANSPORT", 00:27:28.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:28.994 "adrfam": "ipv4", 00:27:28.994 "trsvcid": "$NVMF_PORT", 00:27:28.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:28.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:28.994 "hdgst": ${hdgst:-false}, 00:27:28.994 "ddgst": ${ddgst:-false} 00:27:28.994 }, 00:27:28.994 "method": "bdev_nvme_attach_controller" 00:27:28.994 } 00:27:28.994 EOF 00:27:28.994 )") 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:28.994 { 00:27:28.994 "params": { 00:27:28.994 "name": "Nvme$subsystem", 00:27:28.994 "trtype": "$TEST_TRANSPORT", 00:27:28.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:28.994 "adrfam": "ipv4", 00:27:28.994 "trsvcid": "$NVMF_PORT", 00:27:28.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:28.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:28.994 "hdgst": ${hdgst:-false}, 00:27:28.994 "ddgst": ${ddgst:-false} 00:27:28.994 }, 00:27:28.994 "method": "bdev_nvme_attach_controller" 00:27:28.994 } 00:27:28.994 EOF 00:27:28.994 )") 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@554 -- # cat 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:28.994 { 00:27:28.994 "params": { 00:27:28.994 "name": "Nvme$subsystem", 00:27:28.994 "trtype": "$TEST_TRANSPORT", 00:27:28.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:28.994 "adrfam": "ipv4", 00:27:28.994 "trsvcid": "$NVMF_PORT", 00:27:28.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:28.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:28.994 "hdgst": ${hdgst:-false}, 00:27:28.994 "ddgst": ${ddgst:-false} 00:27:28.994 }, 00:27:28.994 "method": "bdev_nvme_attach_controller" 00:27:28.994 } 00:27:28.994 EOF 00:27:28.994 )") 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:28.994 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:28.994 { 00:27:28.994 "params": { 00:27:28.994 "name": "Nvme$subsystem", 00:27:28.994 "trtype": "$TEST_TRANSPORT", 00:27:28.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:28.994 "adrfam": "ipv4", 00:27:28.994 "trsvcid": "$NVMF_PORT", 00:27:28.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:28.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:28.995 "hdgst": ${hdgst:-false}, 00:27:28.995 "ddgst": ${ddgst:-false} 00:27:28.995 }, 00:27:28.995 "method": "bdev_nvme_attach_controller" 00:27:28.995 } 00:27:28.995 EOF 00:27:28.995 )") 00:27:28.995 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:28.995 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@556 -- # jq . 00:27:28.995 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:28.995 01:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:28.995 "params": { 00:27:28.995 "name": "Nvme1", 00:27:28.995 "trtype": "tcp", 00:27:28.995 "traddr": "10.0.0.2", 00:27:28.995 "adrfam": "ipv4", 00:27:28.995 "trsvcid": "4420", 00:27:28.995 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:28.995 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:28.995 "hdgst": false, 00:27:28.995 "ddgst": false 00:27:28.995 }, 00:27:28.995 "method": "bdev_nvme_attach_controller" 00:27:28.995 },{ 00:27:28.995 "params": { 00:27:28.995 "name": "Nvme2", 00:27:28.995 "trtype": "tcp", 00:27:28.995 "traddr": "10.0.0.2", 00:27:28.995 "adrfam": "ipv4", 00:27:28.995 "trsvcid": "4420", 00:27:28.995 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:28.995 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:28.995 "hdgst": false, 00:27:28.995 "ddgst": false 00:27:28.995 }, 00:27:28.995 "method": "bdev_nvme_attach_controller" 00:27:28.995 },{ 00:27:28.995 "params": { 00:27:28.995 "name": "Nvme3", 00:27:28.995 "trtype": "tcp", 00:27:28.995 "traddr": "10.0.0.2", 00:27:28.995 "adrfam": "ipv4", 00:27:28.995 "trsvcid": "4420", 00:27:28.995 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:28.995 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:28.995 "hdgst": false, 00:27:28.995 "ddgst": false 00:27:28.995 }, 00:27:28.995 "method": "bdev_nvme_attach_controller" 00:27:28.995 },{ 00:27:28.995 "params": { 00:27:28.995 "name": "Nvme4", 00:27:28.995 "trtype": "tcp", 00:27:28.995 "traddr": "10.0.0.2", 00:27:28.995 "adrfam": "ipv4", 00:27:28.995 "trsvcid": "4420", 00:27:28.995 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:28.995 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:28.995 "hdgst": false, 00:27:28.995 "ddgst": false 00:27:28.995 }, 00:27:28.995 "method": "bdev_nvme_attach_controller" 00:27:28.995 },{ 
00:27:28.995 "params": { 00:27:28.995 "name": "Nvme5", 00:27:28.995 "trtype": "tcp", 00:27:28.995 "traddr": "10.0.0.2", 00:27:28.995 "adrfam": "ipv4", 00:27:28.995 "trsvcid": "4420", 00:27:28.995 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:28.995 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:28.995 "hdgst": false, 00:27:28.995 "ddgst": false 00:27:28.995 }, 00:27:28.995 "method": "bdev_nvme_attach_controller" 00:27:28.995 },{ 00:27:28.995 "params": { 00:27:28.995 "name": "Nvme6", 00:27:28.995 "trtype": "tcp", 00:27:28.995 "traddr": "10.0.0.2", 00:27:28.995 "adrfam": "ipv4", 00:27:28.995 "trsvcid": "4420", 00:27:28.995 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:28.995 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:28.995 "hdgst": false, 00:27:28.995 "ddgst": false 00:27:28.995 }, 00:27:28.995 "method": "bdev_nvme_attach_controller" 00:27:28.995 },{ 00:27:28.995 "params": { 00:27:28.995 "name": "Nvme7", 00:27:28.995 "trtype": "tcp", 00:27:28.995 "traddr": "10.0.0.2", 00:27:28.995 "adrfam": "ipv4", 00:27:28.995 "trsvcid": "4420", 00:27:28.995 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:28.995 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:28.995 "hdgst": false, 00:27:28.995 "ddgst": false 00:27:28.995 }, 00:27:28.995 "method": "bdev_nvme_attach_controller" 00:27:28.995 },{ 00:27:28.995 "params": { 00:27:28.995 "name": "Nvme8", 00:27:28.995 "trtype": "tcp", 00:27:28.995 "traddr": "10.0.0.2", 00:27:28.995 "adrfam": "ipv4", 00:27:28.995 "trsvcid": "4420", 00:27:28.995 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:28.995 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:28.995 "hdgst": false, 00:27:28.995 "ddgst": false 00:27:28.995 }, 00:27:28.995 "method": "bdev_nvme_attach_controller" 00:27:28.995 },{ 00:27:28.995 "params": { 00:27:28.995 "name": "Nvme9", 00:27:28.995 "trtype": "tcp", 00:27:28.995 "traddr": "10.0.0.2", 00:27:28.995 "adrfam": "ipv4", 00:27:28.995 "trsvcid": "4420", 00:27:28.995 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:28.995 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:27:28.995 "hdgst": false, 00:27:28.995 "ddgst": false 00:27:28.995 }, 00:27:28.995 "method": "bdev_nvme_attach_controller" 00:27:28.995 },{ 00:27:28.995 "params": { 00:27:28.995 "name": "Nvme10", 00:27:28.995 "trtype": "tcp", 00:27:28.995 "traddr": "10.0.0.2", 00:27:28.995 "adrfam": "ipv4", 00:27:28.995 "trsvcid": "4420", 00:27:28.995 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:28.995 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:28.995 "hdgst": false, 00:27:28.995 "ddgst": false 00:27:28.995 }, 00:27:28.995 "method": "bdev_nvme_attach_controller" 00:27:28.995 }' 00:27:28.995 [2024-07-26 01:09:59.283152] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:27:28.995 [2024-07-26 01:09:59.283229] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:28.995 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.995 [2024-07-26 01:09:59.345902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.252 [2024-07-26 01:09:59.432286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.147 01:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:31.147 01:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:27:31.147 01:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:31.147 01:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.147 01:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:31.147 01:10:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.147 01:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1910105 00:27:31.147 01:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:31.147 01:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:32.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1910105 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:32.079 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1909928 00:27:32.079 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:32.079 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:32.079 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:32.079 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:32.079 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.079 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.079 { 00:27:32.079 "params": { 00:27:32.079 "name": "Nvme$subsystem", 00:27:32.079 "trtype": "$TEST_TRANSPORT", 00:27:32.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.079 "adrfam": "ipv4", 00:27:32.079 "trsvcid": 
"$NVMF_PORT", 00:27:32.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.079 "hdgst": ${hdgst:-false}, 00:27:32.079 "ddgst": ${ddgst:-false} 00:27:32.079 }, 00:27:32.079 "method": "bdev_nvme_attach_controller" 00:27:32.079 } 00:27:32.079 EOF 00:27:32.079 )") 00:27:32.079 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:32.079 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.079 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.079 { 00:27:32.079 "params": { 00:27:32.079 "name": "Nvme$subsystem", 00:27:32.080 "trtype": "$TEST_TRANSPORT", 00:27:32.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.080 "adrfam": "ipv4", 00:27:32.080 "trsvcid": "$NVMF_PORT", 00:27:32.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.080 "hdgst": ${hdgst:-false}, 00:27:32.080 "ddgst": ${ddgst:-false} 00:27:32.080 }, 00:27:32.080 "method": "bdev_nvme_attach_controller" 00:27:32.080 } 00:27:32.080 EOF 00:27:32.080 )") 00:27:32.080 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:32.080 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.080 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.080 { 00:27:32.080 "params": { 00:27:32.080 "name": "Nvme$subsystem", 00:27:32.080 "trtype": "$TEST_TRANSPORT", 00:27:32.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.080 "adrfam": "ipv4", 00:27:32.080 "trsvcid": "$NVMF_PORT", 00:27:32.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.080 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:27:32.080 "hdgst": ${hdgst:-false}, 00:27:32.080 "ddgst": ${ddgst:-false} 00:27:32.080 }, 00:27:32.080 "method": "bdev_nvme_attach_controller" 00:27:32.080 } 00:27:32.080 EOF 00:27:32.080 )") 00:27:32.080 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:32.080 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.080 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.080 { 00:27:32.080 "params": { 00:27:32.080 "name": "Nvme$subsystem", 00:27:32.080 "trtype": "$TEST_TRANSPORT", 00:27:32.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.080 "adrfam": "ipv4", 00:27:32.080 "trsvcid": "$NVMF_PORT", 00:27:32.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.080 "hdgst": ${hdgst:-false}, 00:27:32.080 "ddgst": ${ddgst:-false} 00:27:32.080 }, 00:27:32.080 "method": "bdev_nvme_attach_controller" 00:27:32.080 } 00:27:32.080 EOF 00:27:32.080 )") 00:27:32.080 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:32.080 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.080 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.080 { 00:27:32.080 "params": { 00:27:32.080 "name": "Nvme$subsystem", 00:27:32.080 "trtype": "$TEST_TRANSPORT", 00:27:32.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.080 "adrfam": "ipv4", 00:27:32.080 "trsvcid": "$NVMF_PORT", 00:27:32.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.080 "hdgst": ${hdgst:-false}, 00:27:32.080 "ddgst": ${ddgst:-false} 00:27:32.080 
}, 00:27:32.080 "method": "bdev_nvme_attach_controller" 00:27:32.080 } 00:27:32.080 EOF 00:27:32.080 )") 00:27:32.080 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:32.080 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.080 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.080 { 00:27:32.080 "params": { 00:27:32.080 "name": "Nvme$subsystem", 00:27:32.080 "trtype": "$TEST_TRANSPORT", 00:27:32.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.080 "adrfam": "ipv4", 00:27:32.080 "trsvcid": "$NVMF_PORT", 00:27:32.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.080 "hdgst": ${hdgst:-false}, 00:27:32.080 "ddgst": ${ddgst:-false} 00:27:32.080 }, 00:27:32.080 "method": "bdev_nvme_attach_controller" 00:27:32.080 } 00:27:32.080 EOF 00:27:32.080 )") 00:27:32.080 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:32.080 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.080 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.080 { 00:27:32.080 "params": { 00:27:32.080 "name": "Nvme$subsystem", 00:27:32.080 "trtype": "$TEST_TRANSPORT", 00:27:32.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.080 "adrfam": "ipv4", 00:27:32.080 "trsvcid": "$NVMF_PORT", 00:27:32.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.080 "hdgst": ${hdgst:-false}, 00:27:32.080 "ddgst": ${ddgst:-false} 00:27:32.080 }, 00:27:32.080 "method": "bdev_nvme_attach_controller" 00:27:32.080 } 00:27:32.080 EOF 00:27:32.080 )") 00:27:32.080 01:10:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:32.080 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.080 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.080 { 00:27:32.080 "params": { 00:27:32.080 "name": "Nvme$subsystem", 00:27:32.080 "trtype": "$TEST_TRANSPORT", 00:27:32.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.080 "adrfam": "ipv4", 00:27:32.080 "trsvcid": "$NVMF_PORT", 00:27:32.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.080 "hdgst": ${hdgst:-false}, 00:27:32.080 "ddgst": ${ddgst:-false} 00:27:32.080 }, 00:27:32.080 "method": "bdev_nvme_attach_controller" 00:27:32.080 } 00:27:32.080 EOF 00:27:32.080 )") 00:27:32.080 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:32.080 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.080 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.080 { 00:27:32.080 "params": { 00:27:32.080 "name": "Nvme$subsystem", 00:27:32.080 "trtype": "$TEST_TRANSPORT", 00:27:32.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.080 "adrfam": "ipv4", 00:27:32.080 "trsvcid": "$NVMF_PORT", 00:27:32.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.080 "hdgst": ${hdgst:-false}, 00:27:32.080 "ddgst": ${ddgst:-false} 00:27:32.080 }, 00:27:32.080 "method": "bdev_nvme_attach_controller" 00:27:32.080 } 00:27:32.080 EOF 00:27:32.080 )") 00:27:32.080 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:32.080 01:10:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.080 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.080 { 00:27:32.080 "params": { 00:27:32.080 "name": "Nvme$subsystem", 00:27:32.080 "trtype": "$TEST_TRANSPORT", 00:27:32.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.080 "adrfam": "ipv4", 00:27:32.080 "trsvcid": "$NVMF_PORT", 00:27:32.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.080 "hdgst": ${hdgst:-false}, 00:27:32.080 "ddgst": ${ddgst:-false} 00:27:32.080 }, 00:27:32.080 "method": "bdev_nvme_attach_controller" 00:27:32.080 } 00:27:32.080 EOF 00:27:32.080 )") 00:27:32.080 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:32.080 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
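The trace above repeats nvmf/common.sh's gen_nvmf_target_json loop once per subsystem: each iteration appends one heredoc-built JSON fragment to a `config` array, and the fragments are later comma-joined for bdevperf's `--json` input. A minimal standalone sketch of that pattern (transport, address, and port values are hard-coded assumptions here; the real script reads them from the test environment):

```shell
#!/usr/bin/env bash
# Sketch of the config-building loop traced above: one JSON fragment per
# subsystem via a command-substituted heredoc, collected into an array.
# TEST_TRANSPORT / NVMF_FIRST_TARGET_IP / NVMF_PORT are assumed values;
# <<EOF (no dash) is used with an unindented terminator instead of the
# original's tab-indented <<-EOF.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2 3; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

# Join the fragments exactly as the trace shows: set IFS to a comma, then
# expand "${config[*]}" so the elements are glued with that separator.
IFS=,
joined="${config[*]}"
printf '%s\n' "$joined"
```

Note the join relies on `"${config[*]}"` (star, quoted), which uses the first character of IFS as the separator; `"${config[@]}"` would not join.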
00:27:32.080 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:32.080 01:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:32.080 "params": { 00:27:32.080 "name": "Nvme1", 00:27:32.080 "trtype": "tcp", 00:27:32.080 "traddr": "10.0.0.2", 00:27:32.080 "adrfam": "ipv4", 00:27:32.080 "trsvcid": "4420", 00:27:32.080 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:32.080 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:32.080 "hdgst": false, 00:27:32.080 "ddgst": false 00:27:32.080 }, 00:27:32.080 "method": "bdev_nvme_attach_controller" 00:27:32.080 },{ 00:27:32.080 "params": { 00:27:32.080 "name": "Nvme2", 00:27:32.080 "trtype": "tcp", 00:27:32.080 "traddr": "10.0.0.2", 00:27:32.080 "adrfam": "ipv4", 00:27:32.080 "trsvcid": "4420", 00:27:32.080 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:32.080 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:32.080 "hdgst": false, 00:27:32.080 "ddgst": false 00:27:32.080 }, 00:27:32.080 "method": "bdev_nvme_attach_controller" 00:27:32.080 },{ 00:27:32.080 "params": { 00:27:32.080 "name": "Nvme3", 00:27:32.080 "trtype": "tcp", 00:27:32.080 "traddr": "10.0.0.2", 00:27:32.081 "adrfam": "ipv4", 00:27:32.081 "trsvcid": "4420", 00:27:32.081 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:32.081 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:32.081 "hdgst": false, 00:27:32.081 "ddgst": false 00:27:32.081 }, 00:27:32.081 "method": "bdev_nvme_attach_controller" 00:27:32.081 },{ 00:27:32.081 "params": { 00:27:32.081 "name": "Nvme4", 00:27:32.081 "trtype": "tcp", 00:27:32.081 "traddr": "10.0.0.2", 00:27:32.081 "adrfam": "ipv4", 00:27:32.081 "trsvcid": "4420", 00:27:32.081 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:32.081 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:32.081 "hdgst": false, 00:27:32.081 "ddgst": false 00:27:32.081 }, 00:27:32.081 "method": "bdev_nvme_attach_controller" 00:27:32.081 },{ 00:27:32.081 "params": { 
00:27:32.081 "name": "Nvme5", 00:27:32.081 "trtype": "tcp", 00:27:32.081 "traddr": "10.0.0.2", 00:27:32.081 "adrfam": "ipv4", 00:27:32.081 "trsvcid": "4420", 00:27:32.081 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:32.081 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:32.081 "hdgst": false, 00:27:32.081 "ddgst": false 00:27:32.081 }, 00:27:32.081 "method": "bdev_nvme_attach_controller" 00:27:32.081 },{ 00:27:32.081 "params": { 00:27:32.081 "name": "Nvme6", 00:27:32.081 "trtype": "tcp", 00:27:32.081 "traddr": "10.0.0.2", 00:27:32.081 "adrfam": "ipv4", 00:27:32.081 "trsvcid": "4420", 00:27:32.081 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:32.081 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:32.081 "hdgst": false, 00:27:32.081 "ddgst": false 00:27:32.081 }, 00:27:32.081 "method": "bdev_nvme_attach_controller" 00:27:32.081 },{ 00:27:32.081 "params": { 00:27:32.081 "name": "Nvme7", 00:27:32.081 "trtype": "tcp", 00:27:32.081 "traddr": "10.0.0.2", 00:27:32.081 "adrfam": "ipv4", 00:27:32.081 "trsvcid": "4420", 00:27:32.081 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:32.081 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:32.081 "hdgst": false, 00:27:32.081 "ddgst": false 00:27:32.081 }, 00:27:32.081 "method": "bdev_nvme_attach_controller" 00:27:32.081 },{ 00:27:32.081 "params": { 00:27:32.081 "name": "Nvme8", 00:27:32.081 "trtype": "tcp", 00:27:32.081 "traddr": "10.0.0.2", 00:27:32.081 "adrfam": "ipv4", 00:27:32.081 "trsvcid": "4420", 00:27:32.081 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:32.081 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:32.081 "hdgst": false, 00:27:32.081 "ddgst": false 00:27:32.081 }, 00:27:32.081 "method": "bdev_nvme_attach_controller" 00:27:32.081 },{ 00:27:32.081 "params": { 00:27:32.081 "name": "Nvme9", 00:27:32.081 "trtype": "tcp", 00:27:32.081 "traddr": "10.0.0.2", 00:27:32.081 "adrfam": "ipv4", 00:27:32.081 "trsvcid": "4420", 00:27:32.081 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:32.081 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:27:32.081 "hdgst": false, 00:27:32.081 "ddgst": false 00:27:32.081 }, 00:27:32.081 "method": "bdev_nvme_attach_controller" 00:27:32.081 },{ 00:27:32.081 "params": { 00:27:32.081 "name": "Nvme10", 00:27:32.081 "trtype": "tcp", 00:27:32.081 "traddr": "10.0.0.2", 00:27:32.081 "adrfam": "ipv4", 00:27:32.081 "trsvcid": "4420", 00:27:32.081 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:32.081 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:32.081 "hdgst": false, 00:27:32.081 "ddgst": false 00:27:32.081 }, 00:27:32.081 "method": "bdev_nvme_attach_controller" 00:27:32.081 }' 00:27:32.081 [2024-07-26 01:10:02.342294] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:27:32.081 [2024-07-26 01:10:02.342402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1910526 ] 00:27:32.081 EAL: No free 2048 kB hugepages reported on node 1 00:27:32.081 [2024-07-26 01:10:02.407833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.081 [2024-07-26 01:10:02.493912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.457 Running I/O for 1 seconds... 
00:27:34.829 
00:27:34.829 Latency(us) 
00:27:34.829 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:27:34.829 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:27:34.829 Verification LBA range: start 0x0 length 0x400 
00:27:34.829 Nvme1n1 : 1.21 212.32 13.27 0.00 0.00 298735.12 21554.06 262532.36 
00:27:34.829 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:27:34.829 Verification LBA range: start 0x0 length 0x400 
00:27:34.829 Nvme2n1 : 1.19 268.78 16.80 0.00 0.00 230365.56 20388.98 243891.01 
00:27:34.829 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:27:34.829 Verification LBA range: start 0x0 length 0x400 
00:27:34.829 Nvme3n1 : 1.20 267.67 16.73 0.00 0.00 225381.26 10437.21 245444.46 
00:27:34.829 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:27:34.829 Verification LBA range: start 0x0 length 0x400 
00:27:34.829 Nvme4n1 : 1.18 216.31 13.52 0.00 0.00 279449.98 23010.42 251658.24 
00:27:34.829 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:27:34.829 Verification LBA range: start 0x0 length 0x400 
00:27:34.829 Nvme5n1 : 1.22 210.08 13.13 0.00 0.00 283636.81 23398.78 281173.71 
00:27:34.829 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:27:34.829 Verification LBA range: start 0x0 length 0x400 
00:27:34.829 Nvme6n1 : 1.10 236.73 14.80 0.00 0.00 244128.32 6068.15 257872.02 
00:27:34.829 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:27:34.829 Verification LBA range: start 0x0 length 0x400 
00:27:34.829 Nvme7n1 : 1.22 263.30 16.46 0.00 0.00 218306.18 18738.44 254765.13 
00:27:34.829 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:27:34.829 Verification LBA range: start 0x0 length 0x400 
00:27:34.829 Nvme8n1 : 1.20 213.22 13.33 0.00 0.00 265491.34 26214.40 248551.35 
00:27:34.829 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:27:34.829 Verification LBA range: start 0x0 length 0x400 
00:27:34.829 Nvme9n1 : 1.23 208.93 13.06 0.00 0.00 266936.13 22524.97 292047.83 
00:27:34.829 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:27:34.829 Verification LBA range: start 0x0 length 0x400 
00:27:34.829 Nvme10n1 : 1.22 261.35 16.33 0.00 0.00 209855.68 17185.00 242337.56 
00:27:34.829 =================================================================================================================== 
00:27:34.829 Total : 2358.68 147.42 0.00 0.00 249380.14 6068.15 292047.83 
00:27:34.829 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 
00:27:34.829 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 
00:27:34.829 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:27:34.829 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 
00:27:34.829 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 
00:27:34.829 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 
00:27:34.829 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 
00:27:34.829 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 
00:27:34.829 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 
00:27:34.829 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 
00:27:34.829 
01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:35.097 rmmod nvme_tcp 00:27:35.097 rmmod nvme_fabrics 00:27:35.097 rmmod nvme_keyring 00:27:35.097 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:35.097 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:35.097 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:35.097 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1909928 ']' 00:27:35.097 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1909928 00:27:35.097 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1909928 ']' 00:27:35.097 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 1909928 00:27:35.097 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:27:35.097 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:35.097 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1909928 00:27:35.097 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:35.097 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:35.097 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1909928' 00:27:35.097 killing process 
with pid 1909928 00:27:35.097 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1909928 00:27:35.097 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1909928 00:27:35.667 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:35.667 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:35.667 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:35.667 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:35.667 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:35.667 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.667 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:35.667 01:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:37.567 00:27:37.567 real 0m11.703s 00:27:37.567 user 0m33.006s 00:27:37.567 sys 0m3.411s 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:37.567 ************************************ 00:27:37.567 END TEST nvmf_shutdown_tc1 00:27:37.567 ************************************ 
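The killprocess helper traced above (`kill -0 1909928` before `kill` and `wait`) relies on signal 0 as a pure existence probe: it performs the permission and PID checks but delivers nothing. A hedged standalone sketch of that probe-then-terminate pattern, using a throwaway background `sleep` in place of the nvmf target process:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern from the trace: kill -0 tests whether
# the PID exists without sending a signal, then the process is terminated
# with SIGTERM and reaped with wait so its exit status can be inspected.
sleep 30 &
pid=$!

# Probe only: succeeds iff the process exists and is signalable.
kill -0 "$pid" 2>/dev/null && alive=yes || alive=no

kill "$pid"                    # default signal is SIGTERM (15)

# wait reaps the child; for a signal death the status is 128 + signum.
status=0
wait "$pid" 2>/dev/null || status=$?
```

The real helper additionally retries with `ps` until the process disappears; this sketch only shows the single probe/kill/wait round trip.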
00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:37.567 ************************************ 00:27:37.567 START TEST nvmf_shutdown_tc2 00:27:37.567 ************************************ 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:37.567 01:10:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 
-- # local -ga e810 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:37.567 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:37.567 01:10:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:37.567 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:37.567 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:37.568 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:37.568 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.568 
01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:37.568 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:37.826 01:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:37.826 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:37.826 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:37.826 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:37.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:37.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:27:37.826 00:27:37.826 --- 10.0.0.2 ping statistics --- 00:27:37.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.826 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:27:37.826 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:37.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:37.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:27:37.826 00:27:37.826 --- 10.0.0.1 ping statistics --- 00:27:37.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.826 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:27:37.826 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:37.826 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:27:37.826 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:37.826 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:37.826 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:37.826 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:37.826 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:37.826 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:37.826 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:37.826 01:10:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:37.826 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:37.826 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:37.826 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:37.826 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1911294 00:27:37.826 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:37.826 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1911294 00:27:37.826 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1911294 ']' 00:27:37.826 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.826 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:37.826 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:37.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:37.827 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:37.827 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:37.827 [2024-07-26 01:10:08.123991] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:27:37.827 [2024-07-26 01:10:08.124087] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:37.827 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.827 [2024-07-26 01:10:08.186621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:38.085 [2024-07-26 01:10:08.272553] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:38.085 [2024-07-26 01:10:08.272614] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:38.085 [2024-07-26 01:10:08.272651] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:38.085 [2024-07-26 01:10:08.272663] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:38.085 [2024-07-26 01:10:08.272673] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:38.085 [2024-07-26 01:10:08.272724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:38.085 [2024-07-26 01:10:08.272782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:38.085 [2024-07-26 01:10:08.272847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:38.085 [2024-07-26 01:10:08.272849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:38.085 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:38.085 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:27:38.085 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:38.085 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:38.085 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:38.085 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:38.085 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:38.085 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.085 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:38.085 [2024-07-26 01:10:08.427594] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:38.085 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.085 01:10:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:38.085 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:38.085 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:38.085 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:38.085 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:38.085 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:38.085 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:38.085 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:38.085 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:38.085 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:38.085 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:38.085 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:38.085 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:38.085 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:38.085 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 
00:27:38.086 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:38.086 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:38.086 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:38.086 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:38.086 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:38.086 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:38.086 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:38.086 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:38.086 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:38.086 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:38.086 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:38.086 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.086 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:38.086 Malloc1 00:27:38.344 [2024-07-26 01:10:08.512897] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:38.344 Malloc2 00:27:38.344 Malloc3 00:27:38.344 Malloc4 00:27:38.344 Malloc5 00:27:38.344 Malloc6 00:27:38.603 Malloc7 00:27:38.603 Malloc8 00:27:38.603 Malloc9 
00:27:38.603 Malloc10 00:27:38.603 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.603 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:38.603 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:38.603 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:38.603 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1911381 00:27:38.603 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1911381 /var/tmp/bdevperf.sock 00:27:38.603 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1911381 ']' 00:27:38.603 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:38.603 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:38.603 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:38.603 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:38.603 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:27:38.603 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:27:38.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:38.603 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:27:38.603 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:38.603 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.603 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:38.603 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.603 { 00:27:38.603 "params": { 00:27:38.603 "name": "Nvme$subsystem", 00:27:38.603 "trtype": "$TEST_TRANSPORT", 00:27:38.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.603 "adrfam": "ipv4", 00:27:38.603 "trsvcid": "$NVMF_PORT", 00:27:38.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.603 "hdgst": ${hdgst:-false}, 00:27:38.603 "ddgst": ${ddgst:-false} 00:27:38.603 }, 00:27:38.603 "method": "bdev_nvme_attach_controller" 00:27:38.603 } 00:27:38.603 EOF 00:27:38.603 )") 00:27:38.603 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:38.603 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.603 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.603 { 00:27:38.603 "params": { 00:27:38.603 "name": "Nvme$subsystem", 00:27:38.603 "trtype": "$TEST_TRANSPORT", 00:27:38.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.603 "adrfam": "ipv4", 00:27:38.603 "trsvcid": "$NVMF_PORT", 00:27:38.603 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.603 "hdgst": ${hdgst:-false}, 00:27:38.603 "ddgst": ${ddgst:-false} 00:27:38.603 }, 00:27:38.603 "method": "bdev_nvme_attach_controller" 00:27:38.603 } 00:27:38.603 EOF 00:27:38.603 )") 00:27:38.603 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:38.603 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.603 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.603 { 00:27:38.603 "params": { 00:27:38.603 "name": "Nvme$subsystem", 00:27:38.603 "trtype": "$TEST_TRANSPORT", 00:27:38.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.603 "adrfam": "ipv4", 00:27:38.603 "trsvcid": "$NVMF_PORT", 00:27:38.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.604 "hdgst": ${hdgst:-false}, 00:27:38.604 "ddgst": ${ddgst:-false} 00:27:38.604 }, 00:27:38.604 "method": "bdev_nvme_attach_controller" 00:27:38.604 } 00:27:38.604 EOF 00:27:38.604 )") 00:27:38.604 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:38.604 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.604 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.604 { 00:27:38.604 "params": { 00:27:38.604 "name": "Nvme$subsystem", 00:27:38.604 "trtype": "$TEST_TRANSPORT", 00:27:38.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.604 "adrfam": "ipv4", 00:27:38.604 "trsvcid": "$NVMF_PORT", 00:27:38.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.604 "hdgst": 
${hdgst:-false}, 00:27:38.604 "ddgst": ${ddgst:-false} 00:27:38.604 }, 00:27:38.604 "method": "bdev_nvme_attach_controller" 00:27:38.604 } 00:27:38.604 EOF 00:27:38.604 )") 00:27:38.604 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:38.604 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.604 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.604 { 00:27:38.604 "params": { 00:27:38.604 "name": "Nvme$subsystem", 00:27:38.604 "trtype": "$TEST_TRANSPORT", 00:27:38.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.604 "adrfam": "ipv4", 00:27:38.604 "trsvcid": "$NVMF_PORT", 00:27:38.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.604 "hdgst": ${hdgst:-false}, 00:27:38.604 "ddgst": ${ddgst:-false} 00:27:38.604 }, 00:27:38.604 "method": "bdev_nvme_attach_controller" 00:27:38.604 } 00:27:38.604 EOF 00:27:38.604 )") 00:27:38.604 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:38.604 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.604 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.604 { 00:27:38.604 "params": { 00:27:38.604 "name": "Nvme$subsystem", 00:27:38.604 "trtype": "$TEST_TRANSPORT", 00:27:38.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.604 "adrfam": "ipv4", 00:27:38.604 "trsvcid": "$NVMF_PORT", 00:27:38.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.604 "hdgst": ${hdgst:-false}, 00:27:38.604 "ddgst": ${ddgst:-false} 00:27:38.604 }, 00:27:38.604 "method": "bdev_nvme_attach_controller" 
00:27:38.604 } 00:27:38.604 EOF 00:27:38.604 )") 00:27:38.604 01:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:38.604 01:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.604 01:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.604 { 00:27:38.604 "params": { 00:27:38.604 "name": "Nvme$subsystem", 00:27:38.604 "trtype": "$TEST_TRANSPORT", 00:27:38.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.604 "adrfam": "ipv4", 00:27:38.604 "trsvcid": "$NVMF_PORT", 00:27:38.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.604 "hdgst": ${hdgst:-false}, 00:27:38.604 "ddgst": ${ddgst:-false} 00:27:38.604 }, 00:27:38.604 "method": "bdev_nvme_attach_controller" 00:27:38.604 } 00:27:38.604 EOF 00:27:38.604 )") 00:27:38.604 01:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:38.604 01:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.604 01:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.604 { 00:27:38.604 "params": { 00:27:38.604 "name": "Nvme$subsystem", 00:27:38.604 "trtype": "$TEST_TRANSPORT", 00:27:38.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.604 "adrfam": "ipv4", 00:27:38.604 "trsvcid": "$NVMF_PORT", 00:27:38.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.604 "hdgst": ${hdgst:-false}, 00:27:38.604 "ddgst": ${ddgst:-false} 00:27:38.604 }, 00:27:38.604 "method": "bdev_nvme_attach_controller" 00:27:38.604 } 00:27:38.604 EOF 00:27:38.604 )") 00:27:38.604 01:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@554 -- # cat 00:27:38.604 01:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.604 01:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.604 { 00:27:38.604 "params": { 00:27:38.604 "name": "Nvme$subsystem", 00:27:38.604 "trtype": "$TEST_TRANSPORT", 00:27:38.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.604 "adrfam": "ipv4", 00:27:38.604 "trsvcid": "$NVMF_PORT", 00:27:38.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.604 "hdgst": ${hdgst:-false}, 00:27:38.604 "ddgst": ${ddgst:-false} 00:27:38.604 }, 00:27:38.604 "method": "bdev_nvme_attach_controller" 00:27:38.604 } 00:27:38.604 EOF 00:27:38.604 )") 00:27:38.604 01:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:38.604 01:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.604 01:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.604 { 00:27:38.604 "params": { 00:27:38.604 "name": "Nvme$subsystem", 00:27:38.604 "trtype": "$TEST_TRANSPORT", 00:27:38.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.604 "adrfam": "ipv4", 00:27:38.604 "trsvcid": "$NVMF_PORT", 00:27:38.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.604 "hdgst": ${hdgst:-false}, 00:27:38.604 "ddgst": ${ddgst:-false} 00:27:38.604 }, 00:27:38.604 "method": "bdev_nvme_attach_controller" 00:27:38.604 } 00:27:38.604 EOF 00:27:38.604 )") 00:27:38.604 01:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:38.604 01:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
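The `for subsystem ... config+=("$(cat <<-EOF ...)")` lines repeated above build one JSON `bdev_nvme_attach_controller` fragment per subsystem by appending here-document output to a bash array. A minimal sketch of that pattern (variable values and the two-subsystem loop are illustrative, not taken from the real test environment):

```shell
# Build one JSON "params" fragment per subsystem in a bash array,
# mirroring the config+=("$(cat <<-EOF ...)") pattern in nvmf/common.sh.
config=()
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
printf '%s\n' "${#config[@]}"   # prints 2
```

Because the here-document delimiter is unquoted, `$subsystem` and the `$NVMF_*` variables expand inside the JSON body, which is exactly why the trace shows the templated fragment once per loop iteration.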
nvmf/common.sh@556 -- # jq . 00:27:38.604 01:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:27:38.604 01:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:38.604 "params": { 00:27:38.604 "name": "Nvme1", 00:27:38.604 "trtype": "tcp", 00:27:38.604 "traddr": "10.0.0.2", 00:27:38.604 "adrfam": "ipv4", 00:27:38.604 "trsvcid": "4420", 00:27:38.604 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:38.604 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:38.604 "hdgst": false, 00:27:38.604 "ddgst": false 00:27:38.604 }, 00:27:38.604 "method": "bdev_nvme_attach_controller" 00:27:38.604 },{ 00:27:38.604 "params": { 00:27:38.604 "name": "Nvme2", 00:27:38.604 "trtype": "tcp", 00:27:38.604 "traddr": "10.0.0.2", 00:27:38.604 "adrfam": "ipv4", 00:27:38.604 "trsvcid": "4420", 00:27:38.604 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:38.604 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:38.604 "hdgst": false, 00:27:38.604 "ddgst": false 00:27:38.604 }, 00:27:38.604 "method": "bdev_nvme_attach_controller" 00:27:38.604 },{ 00:27:38.604 "params": { 00:27:38.604 "name": "Nvme3", 00:27:38.604 "trtype": "tcp", 00:27:38.604 "traddr": "10.0.0.2", 00:27:38.604 "adrfam": "ipv4", 00:27:38.604 "trsvcid": "4420", 00:27:38.604 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:38.604 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:38.604 "hdgst": false, 00:27:38.604 "ddgst": false 00:27:38.604 }, 00:27:38.604 "method": "bdev_nvme_attach_controller" 00:27:38.604 },{ 00:27:38.604 "params": { 00:27:38.604 "name": "Nvme4", 00:27:38.604 "trtype": "tcp", 00:27:38.604 "traddr": "10.0.0.2", 00:27:38.605 "adrfam": "ipv4", 00:27:38.605 "trsvcid": "4420", 00:27:38.605 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:38.605 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:38.605 "hdgst": false, 00:27:38.605 "ddgst": false 00:27:38.605 }, 00:27:38.605 "method": "bdev_nvme_attach_controller" 00:27:38.605 },{ 
00:27:38.605 "params": { 00:27:38.605 "name": "Nvme5", 00:27:38.605 "trtype": "tcp", 00:27:38.605 "traddr": "10.0.0.2", 00:27:38.605 "adrfam": "ipv4", 00:27:38.605 "trsvcid": "4420", 00:27:38.605 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:38.605 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:38.605 "hdgst": false, 00:27:38.605 "ddgst": false 00:27:38.605 }, 00:27:38.605 "method": "bdev_nvme_attach_controller" 00:27:38.605 },{ 00:27:38.605 "params": { 00:27:38.605 "name": "Nvme6", 00:27:38.605 "trtype": "tcp", 00:27:38.605 "traddr": "10.0.0.2", 00:27:38.605 "adrfam": "ipv4", 00:27:38.605 "trsvcid": "4420", 00:27:38.605 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:38.605 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:38.605 "hdgst": false, 00:27:38.605 "ddgst": false 00:27:38.605 }, 00:27:38.605 "method": "bdev_nvme_attach_controller" 00:27:38.605 },{ 00:27:38.605 "params": { 00:27:38.605 "name": "Nvme7", 00:27:38.605 "trtype": "tcp", 00:27:38.605 "traddr": "10.0.0.2", 00:27:38.605 "adrfam": "ipv4", 00:27:38.605 "trsvcid": "4420", 00:27:38.605 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:38.605 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:38.605 "hdgst": false, 00:27:38.605 "ddgst": false 00:27:38.605 }, 00:27:38.605 "method": "bdev_nvme_attach_controller" 00:27:38.605 },{ 00:27:38.605 "params": { 00:27:38.605 "name": "Nvme8", 00:27:38.605 "trtype": "tcp", 00:27:38.605 "traddr": "10.0.0.2", 00:27:38.605 "adrfam": "ipv4", 00:27:38.605 "trsvcid": "4420", 00:27:38.605 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:38.605 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:38.605 "hdgst": false, 00:27:38.605 "ddgst": false 00:27:38.605 }, 00:27:38.605 "method": "bdev_nvme_attach_controller" 00:27:38.605 },{ 00:27:38.605 "params": { 00:27:38.605 "name": "Nvme9", 00:27:38.605 "trtype": "tcp", 00:27:38.605 "traddr": "10.0.0.2", 00:27:38.605 "adrfam": "ipv4", 00:27:38.605 "trsvcid": "4420", 00:27:38.605 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:38.605 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:27:38.605 "hdgst": false, 00:27:38.605 "ddgst": false 00:27:38.605 }, 00:27:38.605 "method": "bdev_nvme_attach_controller" 00:27:38.605 },{ 00:27:38.605 "params": { 00:27:38.605 "name": "Nvme10", 00:27:38.605 "trtype": "tcp", 00:27:38.605 "traddr": "10.0.0.2", 00:27:38.605 "adrfam": "ipv4", 00:27:38.605 "trsvcid": "4420", 00:27:38.605 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:38.605 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:38.605 "hdgst": false, 00:27:38.605 "ddgst": false 00:27:38.605 }, 00:27:38.605 "method": "bdev_nvme_attach_controller" 00:27:38.605 }' 00:27:38.605 [2024-07-26 01:10:09.025848] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:27:38.605 [2024-07-26 01:10:09.025923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1911381 ] 00:27:38.863 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.863 [2024-07-26 01:10:09.091208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.863 [2024-07-26 01:10:09.178374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.757 Running I/O for 10 seconds... 
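The `IFS=,` followed by `printf '%s\n' '{...},{...}'` step above is how the per-subsystem fragments get joined into the single comma-separated blob handed to bdevperf. A sketch of that join, with toy fragments standing in for the real controller configs:

```shell
# Sketch of the IFS join step: with IFS set to ",", "${config[*]}"
# concatenates the array elements using a comma separator.
config=('{"name":"Nvme1"}' '{"name":"Nvme2"}')
IFS=,
joined=$(printf '%s\n' "${config[*]}")
unset IFS
echo "$joined"   # prints {"name":"Nvme1"},{"name":"Nvme2"}
```

Only `"${array[*]}"` (not `"${array[@]}"`) uses the first character of `IFS` as the joiner, which is why the trace sets `IFS=,` immediately before the final `printf`.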
00:27:40.757 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:40.757 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:27:40.757 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:40.757 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.757 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:40.757 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.757 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:40.757 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:40.757 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:40.757 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:27:40.757 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:27:40.757 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:40.757 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:40.757 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:40.757 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:41.015 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.015 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:41.015 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.015 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:41.015 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:41.015 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:41.273 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:41.273 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:41.273 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:41.273 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:41.273 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.273 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:41.273 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.273 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:41.273 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:41.273 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:41.530 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:41.530 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:41.531 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:41.531 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:41.531 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.531 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:41.531 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.531 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=135 00:27:41.531 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 135 -ge 100 ']' 00:27:41.531 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:27:41.531 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:27:41.531 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:27:41.531 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1911381 00:27:41.531 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1911381 
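The `waitforio` trace above (read_io_count=3, then 67, then 135) is a bounded polling loop: fetch `num_read_ops` for Nvme1n1 via `rpc_cmd bdev_get_iostat | jq`, and succeed once it crosses 100 or fail after ten attempts. A hedged sketch of that loop, with a stub counter standing in for the `rpc_cmd`/`jq` pipeline:

```shell
# Sketch of target/shutdown.sh waitforio: poll a counter up to 10 times,
# break with success once it reaches the threshold. bump_read_ops is a
# stub replacing "rpc_cmd ... bdev_get_iostat | jq '.bdevs[0].num_read_ops'".
reads=0
bump_read_ops() { reads=$((reads + 67)); }   # stub: I/O count grows per poll

waitforio() {
  local ret=1 i
  for ((i = 10; i != 0; i--)); do
    bump_read_ops
    if [ "$reads" -ge 100 ]; then
      ret=0
      break
    fi
    sleep 0.25
  done
  return $ret
}

waitforio && echo "io observed"
```

Note the stub mutates a global rather than echoing through `$(...)`, since a command substitution would run in a subshell and the parent's counter would never advance.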
']' 00:27:41.531 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1911381 00:27:41.531 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:27:41.531 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:41.531 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1911381 00:27:41.531 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:41.531 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:41.531 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1911381' 00:27:41.531 killing process with pid 1911381 00:27:41.531 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1911381 00:27:41.531 01:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1911381 00:27:41.531 Received shutdown signal, test time was about 0.922875 seconds 00:27:41.531 00:27:41.531 Latency(us) 00:27:41.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:41.531 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:41.531 Verification LBA range: start 0x0 length 0x400 00:27:41.531 Nvme1n1 : 0.90 217.47 13.59 0.00 0.00 286507.74 13689.74 251658.24 00:27:41.531 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:41.531 Verification LBA range: start 0x0 length 0x400 00:27:41.531 Nvme2n1 : 0.87 219.81 13.74 0.00 0.00 281595.70 20486.07 251658.24 00:27:41.531 Job: Nvme3n1 (Core 
Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:41.531 Verification LBA range: start 0x0 length 0x400 00:27:41.531 Nvme3n1 : 0.91 285.37 17.84 0.00 0.00 212147.39 6213.78 245444.46 00:27:41.531 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:41.531 Verification LBA range: start 0x0 length 0x400 00:27:41.531 Nvme4n1 : 0.92 278.53 17.41 0.00 0.00 213256.72 22913.33 267192.70 00:27:41.531 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:41.531 Verification LBA range: start 0x0 length 0x400 00:27:41.531 Nvme5n1 : 0.92 277.64 17.35 0.00 0.00 209255.35 18447.17 250104.79 00:27:41.531 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:41.531 Verification LBA range: start 0x0 length 0x400 00:27:41.531 Nvme6n1 : 0.88 217.23 13.58 0.00 0.00 260483.86 22816.24 250104.79 00:27:41.531 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:41.531 Verification LBA range: start 0x0 length 0x400 00:27:41.531 Nvme7n1 : 0.89 215.10 13.44 0.00 0.00 256999.98 22524.97 233016.89 00:27:41.531 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:41.531 Verification LBA range: start 0x0 length 0x400 00:27:41.531 Nvme8n1 : 0.90 214.08 13.38 0.00 0.00 252607.91 33787.45 236123.78 00:27:41.531 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:41.531 Verification LBA range: start 0x0 length 0x400 00:27:41.531 Nvme9n1 : 0.91 210.77 13.17 0.00 0.00 251643.58 21068.61 284280.60 00:27:41.531 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:41.531 Verification LBA range: start 0x0 length 0x400 00:27:41.531 Nvme10n1 : 0.90 213.42 13.34 0.00 0.00 241901.67 17573.36 253211.69 00:27:41.531 =================================================================================================================== 00:27:41.531 Total : 2349.43 146.84 0.00 0.00 243457.69 6213.78 284280.60 
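The `killprocess 1911381` sequence above guards the kill with several checks: the pid must be non-empty, the process must still be alive (`kill -0`), and on Linux its command name (`ps --no-headers -o comm=`) must not be `sudo` before it is killed and reaped with `wait`. A sketch of that guard pattern, demonstrated on a disposable `sleep` child:

```shell
# Sketch of the killprocess guard: validate the pid, refuse to kill a
# sudo wrapper, then terminate and reap the process.
killprocess() {
  local pid=$1
  [ -n "$pid" ] || return 1                 # the "'[' -z ... ']'" guard
  kill -0 "$pid" 2>/dev/null || return 1    # still alive?
  if [ "$(uname)" = Linux ]; then
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" != sudo ] || return 1         # never signal sudo directly
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null
}

sleep 60 &      # disposable child to demonstrate on
child=$!
killprocess "$child"
```

The trailing `wait` matters: it reaps the child and lets the caller observe its exit status instead of leaving a zombie behind, matching the `kill` / `wait` pair in the trace.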
00:27:41.788 01:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:27:42.718 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1911294 00:27:42.718 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:27:42.718 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:42.718 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:42.718 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:42.718 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:42.718 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:42.718 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:27:42.718 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:42.718 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:27:42.718 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:42.718 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:42.718 rmmod nvme_tcp 00:27:42.976 rmmod nvme_fabrics 00:27:42.976 rmmod nvme_keyring 00:27:42.976 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
00:27:42.976 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:27:42.976 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:27:42.976 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1911294 ']' 00:27:42.976 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1911294 00:27:42.976 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1911294 ']' 00:27:42.976 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1911294 00:27:42.976 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:27:42.976 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:42.976 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1911294 00:27:42.976 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:42.976 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:42.976 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1911294' 00:27:42.976 killing process with pid 1911294 00:27:42.976 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1911294 00:27:42.976 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1911294 00:27:43.539 01:10:13 
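The cleanup trace above (`set +e`, `for i in {1..20}`, `modprobe -v -r nvme-tcp`, then `set -e`) retries module unload because the module may still be busy right after the target exits. A generic sketch of that retry shape, with a stub in place of `modprobe` so it can run anywhere:

```shell
# Sketch of the nvmfcleanup retry loop: disable errexit, attempt the
# unload up to 20 times with a short pause, re-enable errexit after.
# try_unload is a stub for "modprobe -v -r nvme-tcp"; here it succeeds
# on the third attempt.
attempts=0
try_unload() { attempts=$((attempts + 1)); [ "$attempts" -ge 3 ]; }

set +e
for i in {1..20}; do
  if try_unload; then
    break
  fi
  sleep 0.2
done
set -e
echo "unloaded after $attempts attempts"   # prints: unloaded after 3 attempts
```

Wrapping only the flaky section in `set +e` keeps the rest of the script failing fast while tolerating transient `rmmod`/`modprobe -r` errors here.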
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:43.539 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:43.539 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:43.539 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:43.539 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:43.540 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.540 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:43.540 01:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.440 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:45.440 00:27:45.440 real 0m7.814s 00:27:45.440 user 0m23.977s 00:27:45.440 sys 0m1.510s 00:27:45.440 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:45.440 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:45.440 ************************************ 00:27:45.440 END TEST nvmf_shutdown_tc2 00:27:45.440 ************************************ 00:27:45.440 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:45.440 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:45.440 01:10:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:45.440 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:45.440 ************************************ 00:27:45.440 START TEST nvmf_shutdown_tc3 00:27:45.440 ************************************ 00:27:45.440 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:27:45.440 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:27:45.440 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:45.440 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:45.440 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:45.440 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:45.440 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:45.440 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:45.440 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:45.440 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:45.440 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.440 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:45.440 01:10:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:45.440 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:45.440 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:45.440 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:45.440 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:45.440 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:45.440 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:45.440 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:45.440 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:45.440 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:45.440 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 
00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:45.441 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:45.441 Found 0000:0a:00.1 
(0x8086 - 0x159b) 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:45.441 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:45.441 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:45.441 01:10:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add 
cvl_0_0_ns_spdk 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:45.441 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:45.700 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:45.700 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:45.700 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:45.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:45.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:27:45.700 00:27:45.700 --- 10.0.0.2 ping statistics --- 00:27:45.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.700 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:27:45.700 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:45.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:45.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:27:45.700 00:27:45.700 --- 10.0.0.1 ping statistics --- 00:27:45.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.700 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:27:45.700 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:45.700 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:27:45.700 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:45.700 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:45.700 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:45.700 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:45.700 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:45.700 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:45.700 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:45.700 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:45.700 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:45.700 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:45.700 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:45.700 
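The network bring-up traced above (common.sh's `nvmf_tcp_init`) moves one port of the NIC into a network namespace so target and initiator can talk over real TCP on one host. A rough sketch of that sequence follows; the interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the 10.0.0.0/24 addresses are taken directly from the log, and all commands require root on a machine with those interfaces present.

```shell
#!/usr/bin/env bash
# Sketch of the TCP test-network bring-up traced above (nvmf_tcp_init).
# Names and addresses mirror this log; run as root.
set -e

TARGET_IF=cvl_0_0        # moved into the target namespace
INITIATOR_IF=cvl_0_1     # stays in the root namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic (port 4420) in from the initiator side.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Verify connectivity in both directions, as the trace does.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Because the target runs inside the namespace, every target-side command in the rest of the trace is prefixed with `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` array).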
01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1912390 00:27:45.700 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:45.700 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1912390 00:27:45.700 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1912390 ']' 00:27:45.700 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.700 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:45.700 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:45.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:45.700 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:45.700 01:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:45.700 [2024-07-26 01:10:15.966351] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:27:45.700 [2024-07-26 01:10:15.966435] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:45.700 EAL: No free 2048 kB hugepages reported on node 1 00:27:45.700 [2024-07-26 01:10:16.033926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:45.959 [2024-07-26 01:10:16.131215] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:45.959 [2024-07-26 01:10:16.131272] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:45.959 [2024-07-26 01:10:16.131289] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:45.959 [2024-07-26 01:10:16.131302] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:45.959 [2024-07-26 01:10:16.131313] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:45.959 [2024-07-26 01:10:16.131399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:45.959 [2024-07-26 01:10:16.134079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:45.959 [2024-07-26 01:10:16.134153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:45.959 [2024-07-26 01:10:16.134157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:45.959 [2024-07-26 01:10:16.290565] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.959 01:10:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 
00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.959 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:45.959 Malloc1 00:27:45.959 [2024-07-26 01:10:16.376139] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:46.218 Malloc2 00:27:46.218 Malloc3 00:27:46.218 Malloc4 00:27:46.218 Malloc5 00:27:46.218 Malloc6 00:27:46.477 Malloc7 00:27:46.477 Malloc8 00:27:46.477 Malloc9 
00:27:46.477 Malloc10 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1912453 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1912453 /var/tmp/bdevperf.sock 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1912453 ']' 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
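The subsystem-creation loop above (the repeated `for i in "${num_subsystems[@]}"` / `cat` pairs writing `rpcs.txt`, yielding Malloc1 through Malloc10) is a generate-then-execute pattern: one batch of RPC lines is appended per subsystem, then the whole file is run at once. The sketch below illustrates that pattern only; the temp-file name and the exact RPC arguments (bdev size, serial) are illustrative assumptions, not copied from shutdown.sh.

```shell
#!/usr/bin/env bash
# Sketch of the generate-then-execute RPC pattern traced above:
# append one RPC batch per subsystem, then run the file in one shot.
RPCS=$(mktemp)

for i in $(seq 1 10); do
  cat >> "$RPCS" <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done

# In the test this batch would be piped through the SPDK RPC client, e.g.:
#   scripts/rpc.py < "$RPCS"
echo "generated $(grep -c bdev_malloc_create "$RPCS") malloc bdevs"
```

Batching the RPCs this way keeps the trace short (one `rpc_cmd` invocation) and makes the ten near-identical subsystems easy to audit in the log.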
/var/tmp/bdevperf.sock...' 00:27:46.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:46.477 { 00:27:46.477 "params": { 00:27:46.477 "name": "Nvme$subsystem", 00:27:46.477 "trtype": "$TEST_TRANSPORT", 00:27:46.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.477 "adrfam": "ipv4", 00:27:46.477 "trsvcid": "$NVMF_PORT", 00:27:46.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.477 "hdgst": ${hdgst:-false}, 00:27:46.477 "ddgst": ${ddgst:-false} 00:27:46.477 }, 00:27:46.477 "method": "bdev_nvme_attach_controller" 00:27:46.477 } 00:27:46.477 EOF 00:27:46.477 )") 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:46.477 { 00:27:46.477 "params": { 00:27:46.477 "name": "Nvme$subsystem", 00:27:46.477 "trtype": "$TEST_TRANSPORT", 00:27:46.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.477 "adrfam": "ipv4", 00:27:46.477 "trsvcid": "$NVMF_PORT", 00:27:46.477 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.477 "hdgst": ${hdgst:-false}, 00:27:46.477 "ddgst": ${ddgst:-false} 00:27:46.477 }, 00:27:46.477 "method": "bdev_nvme_attach_controller" 00:27:46.477 } 00:27:46.477 EOF 00:27:46.477 )") 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:46.477 { 00:27:46.477 "params": { 00:27:46.477 "name": "Nvme$subsystem", 00:27:46.477 "trtype": "$TEST_TRANSPORT", 00:27:46.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.477 "adrfam": "ipv4", 00:27:46.477 "trsvcid": "$NVMF_PORT", 00:27:46.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.477 "hdgst": ${hdgst:-false}, 00:27:46.477 "ddgst": ${ddgst:-false} 00:27:46.477 }, 00:27:46.477 "method": "bdev_nvme_attach_controller" 00:27:46.477 } 00:27:46.477 EOF 00:27:46.477 )") 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:46.477 { 00:27:46.477 "params": { 00:27:46.477 "name": "Nvme$subsystem", 00:27:46.477 "trtype": "$TEST_TRANSPORT", 00:27:46.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.477 "adrfam": "ipv4", 00:27:46.477 "trsvcid": "$NVMF_PORT", 00:27:46.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.477 "hdgst": 
${hdgst:-false}, 00:27:46.477 "ddgst": ${ddgst:-false} 00:27:46.477 }, 00:27:46.477 "method": "bdev_nvme_attach_controller" 00:27:46.477 } 00:27:46.477 EOF 00:27:46.477 )") 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:46.477 { 00:27:46.477 "params": { 00:27:46.477 "name": "Nvme$subsystem", 00:27:46.477 "trtype": "$TEST_TRANSPORT", 00:27:46.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.477 "adrfam": "ipv4", 00:27:46.477 "trsvcid": "$NVMF_PORT", 00:27:46.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.477 "hdgst": ${hdgst:-false}, 00:27:46.477 "ddgst": ${ddgst:-false} 00:27:46.477 }, 00:27:46.477 "method": "bdev_nvme_attach_controller" 00:27:46.477 } 00:27:46.477 EOF 00:27:46.477 )") 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:46.477 { 00:27:46.477 "params": { 00:27:46.477 "name": "Nvme$subsystem", 00:27:46.477 "trtype": "$TEST_TRANSPORT", 00:27:46.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.477 "adrfam": "ipv4", 00:27:46.477 "trsvcid": "$NVMF_PORT", 00:27:46.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.477 "hdgst": ${hdgst:-false}, 00:27:46.477 "ddgst": ${ddgst:-false} 00:27:46.477 }, 00:27:46.477 "method": "bdev_nvme_attach_controller" 
00:27:46.477 } 00:27:46.477 EOF 00:27:46.477 )") 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:46.477 { 00:27:46.477 "params": { 00:27:46.477 "name": "Nvme$subsystem", 00:27:46.477 "trtype": "$TEST_TRANSPORT", 00:27:46.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.477 "adrfam": "ipv4", 00:27:46.477 "trsvcid": "$NVMF_PORT", 00:27:46.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.477 "hdgst": ${hdgst:-false}, 00:27:46.477 "ddgst": ${ddgst:-false} 00:27:46.477 }, 00:27:46.477 "method": "bdev_nvme_attach_controller" 00:27:46.477 } 00:27:46.477 EOF 00:27:46.477 )") 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:46.477 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:46.478 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:46.478 { 00:27:46.478 "params": { 00:27:46.478 "name": "Nvme$subsystem", 00:27:46.478 "trtype": "$TEST_TRANSPORT", 00:27:46.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.478 "adrfam": "ipv4", 00:27:46.478 "trsvcid": "$NVMF_PORT", 00:27:46.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.478 "hdgst": ${hdgst:-false}, 00:27:46.478 "ddgst": ${ddgst:-false} 00:27:46.478 }, 00:27:46.478 "method": "bdev_nvme_attach_controller" 00:27:46.478 } 00:27:46.478 EOF 00:27:46.478 )") 00:27:46.478 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@554 -- # cat 00:27:46.478 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:46.478 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:46.478 { 00:27:46.478 "params": { 00:27:46.478 "name": "Nvme$subsystem", 00:27:46.478 "trtype": "$TEST_TRANSPORT", 00:27:46.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.478 "adrfam": "ipv4", 00:27:46.478 "trsvcid": "$NVMF_PORT", 00:27:46.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.478 "hdgst": ${hdgst:-false}, 00:27:46.478 "ddgst": ${ddgst:-false} 00:27:46.478 }, 00:27:46.478 "method": "bdev_nvme_attach_controller" 00:27:46.478 } 00:27:46.478 EOF 00:27:46.478 )") 00:27:46.478 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:46.478 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:46.478 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:46.478 { 00:27:46.478 "params": { 00:27:46.478 "name": "Nvme$subsystem", 00:27:46.478 "trtype": "$TEST_TRANSPORT", 00:27:46.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.478 "adrfam": "ipv4", 00:27:46.478 "trsvcid": "$NVMF_PORT", 00:27:46.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.478 "hdgst": ${hdgst:-false}, 00:27:46.478 "ddgst": ${ddgst:-false} 00:27:46.478 }, 00:27:46.478 "method": "bdev_nvme_attach_controller" 00:27:46.478 } 00:27:46.478 EOF 00:27:46.478 )") 00:27:46.478 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:46.478 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@556 -- # jq . 00:27:46.478 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:27:46.478 01:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:46.478 "params": { 00:27:46.478 "name": "Nvme1", 00:27:46.478 "trtype": "tcp", 00:27:46.478 "traddr": "10.0.0.2", 00:27:46.478 "adrfam": "ipv4", 00:27:46.478 "trsvcid": "4420", 00:27:46.478 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:46.478 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:46.478 "hdgst": false, 00:27:46.478 "ddgst": false 00:27:46.478 }, 00:27:46.478 "method": "bdev_nvme_attach_controller" 00:27:46.478 },{ 00:27:46.478 "params": { 00:27:46.478 "name": "Nvme2", 00:27:46.478 "trtype": "tcp", 00:27:46.478 "traddr": "10.0.0.2", 00:27:46.478 "adrfam": "ipv4", 00:27:46.478 "trsvcid": "4420", 00:27:46.478 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:46.478 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:46.478 "hdgst": false, 00:27:46.478 "ddgst": false 00:27:46.478 }, 00:27:46.478 "method": "bdev_nvme_attach_controller" 00:27:46.478 },{ 00:27:46.478 "params": { 00:27:46.478 "name": "Nvme3", 00:27:46.478 "trtype": "tcp", 00:27:46.478 "traddr": "10.0.0.2", 00:27:46.478 "adrfam": "ipv4", 00:27:46.478 "trsvcid": "4420", 00:27:46.478 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:46.478 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:46.478 "hdgst": false, 00:27:46.478 "ddgst": false 00:27:46.478 }, 00:27:46.478 "method": "bdev_nvme_attach_controller" 00:27:46.478 },{ 00:27:46.478 "params": { 00:27:46.478 "name": "Nvme4", 00:27:46.478 "trtype": "tcp", 00:27:46.478 "traddr": "10.0.0.2", 00:27:46.478 "adrfam": "ipv4", 00:27:46.478 "trsvcid": "4420", 00:27:46.478 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:46.478 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:46.478 "hdgst": false, 00:27:46.478 "ddgst": false 00:27:46.478 }, 00:27:46.478 "method": "bdev_nvme_attach_controller" 00:27:46.478 },{ 
00:27:46.478 "params": { 00:27:46.478 "name": "Nvme5", 00:27:46.478 "trtype": "tcp", 00:27:46.478 "traddr": "10.0.0.2", 00:27:46.478 "adrfam": "ipv4", 00:27:46.478 "trsvcid": "4420", 00:27:46.478 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:46.478 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:46.478 "hdgst": false, 00:27:46.478 "ddgst": false 00:27:46.478 }, 00:27:46.478 "method": "bdev_nvme_attach_controller" 00:27:46.478 },{ 00:27:46.478 "params": { 00:27:46.478 "name": "Nvme6", 00:27:46.478 "trtype": "tcp", 00:27:46.478 "traddr": "10.0.0.2", 00:27:46.478 "adrfam": "ipv4", 00:27:46.478 "trsvcid": "4420", 00:27:46.478 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:46.478 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:46.478 "hdgst": false, 00:27:46.478 "ddgst": false 00:27:46.478 }, 00:27:46.478 "method": "bdev_nvme_attach_controller" 00:27:46.478 },{ 00:27:46.478 "params": { 00:27:46.478 "name": "Nvme7", 00:27:46.478 "trtype": "tcp", 00:27:46.478 "traddr": "10.0.0.2", 00:27:46.478 "adrfam": "ipv4", 00:27:46.478 "trsvcid": "4420", 00:27:46.478 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:46.478 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:46.478 "hdgst": false, 00:27:46.478 "ddgst": false 00:27:46.478 }, 00:27:46.478 "method": "bdev_nvme_attach_controller" 00:27:46.478 },{ 00:27:46.478 "params": { 00:27:46.478 "name": "Nvme8", 00:27:46.478 "trtype": "tcp", 00:27:46.478 "traddr": "10.0.0.2", 00:27:46.478 "adrfam": "ipv4", 00:27:46.478 "trsvcid": "4420", 00:27:46.478 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:46.478 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:46.478 "hdgst": false, 00:27:46.478 "ddgst": false 00:27:46.478 }, 00:27:46.478 "method": "bdev_nvme_attach_controller" 00:27:46.478 },{ 00:27:46.478 "params": { 00:27:46.478 "name": "Nvme9", 00:27:46.478 "trtype": "tcp", 00:27:46.478 "traddr": "10.0.0.2", 00:27:46.478 "adrfam": "ipv4", 00:27:46.478 "trsvcid": "4420", 00:27:46.478 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:46.478 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:27:46.478 "hdgst": false, 00:27:46.478 "ddgst": false 00:27:46.478 }, 00:27:46.478 "method": "bdev_nvme_attach_controller" 00:27:46.478 },{ 00:27:46.478 "params": { 00:27:46.478 "name": "Nvme10", 00:27:46.478 "trtype": "tcp", 00:27:46.478 "traddr": "10.0.0.2", 00:27:46.478 "adrfam": "ipv4", 00:27:46.478 "trsvcid": "4420", 00:27:46.478 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:46.478 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:46.478 "hdgst": false, 00:27:46.478 "ddgst": false 00:27:46.478 }, 00:27:46.478 "method": "bdev_nvme_attach_controller" 00:27:46.478 }' 00:27:46.478 [2024-07-26 01:10:16.889278] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:27:46.478 [2024-07-26 01:10:16.889370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1912453 ] 00:27:46.736 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.736 [2024-07-26 01:10:16.955031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.736 [2024-07-26 01:10:17.042413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.634 Running I/O for 10 seconds... 
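The xtrace above shows how nvmf/common.sh builds the bdevperf configuration: a heredoc-generated JSON fragment per subsystem is appended to a bash array, the fragments are comma-joined via `IFS=,` with `"${config[*]}"`, and the result is normalized with `jq .`. A minimal standalone sketch of that pattern follows; the endpoint values mirror the log but are illustrative, and the final `jq` step is left as a comment so the sketch has no external dependency.

```shell
# Sketch of the config-assembly pattern traced above (nvmf/common.sh@534/@554):
# one heredoc-generated JSON fragment per subsystem, comma-joined at the end.
TEST_TRANSPORT=tcp            # example values mirroring the log
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Comma-join the fragments exactly as the trace shows (IFS=, + "${config[*]}");
# the real script then pipes the assembled document through `jq .`.
IFS=,
joined="${config[*]}"
printf '%s\n' "$joined"
```

The `${hdgst:-false}` / `${ddgst:-false}` defaults are why every generated controller entry in the log carries `"hdgst": false, "ddgst": false`.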
00:27:48.634 01:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:48.634 01:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:27:48.634 01:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:48.634 01:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.634 01:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:48.634 01:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.634 01:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:48.634 01:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:48.634 01:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:48.634 01:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:48.634 01:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:27:48.634 01:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:27:48.634 01:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:48.634 01:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:48.634 01:10:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:48.634 01:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:48.634 01:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.634 01:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:48.634 01:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.634 01:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:48.634 01:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:48.634 01:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:48.891 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:48.891 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:48.891 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:48.891 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:48.891 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.891 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:48.891 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:27:48.891 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:48.891 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:48.891 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:49.148 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:49.148 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:49.148 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:49.148 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:49.148 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.149 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:49.149 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.149 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:49.149 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:49.149 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:27:49.149 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:27:49.149 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:27:49.149 01:10:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1912390 00:27:49.149 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1912390 ']' 00:27:49.149 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1912390 00:27:49.149 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:27:49.149 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:49.149 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1912390 00:27:49.415 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:49.415 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:49.415 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1912390' 00:27:49.415 killing process with pid 1912390 00:27:49.415 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1912390 00:27:49.415 01:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1912390 00:27:49.415 [2024-07-26 01:10:19.585771] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.585873] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.585898] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.585917] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.585929] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.585954] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.585968] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.585980] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.585992] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.586005] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.586017] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.586029] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.586041] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.586074] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.586087] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) 
to be set 00:27:49.415 [2024-07-26 01:10:19.586100] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.586113] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.586125] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.586138] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.586151] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.586164] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.586177] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.586164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-07-26 01:10:19.586189] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.586201] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.586203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.415 [2024-07-26 01:10:19.586214] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 
00:27:49.415 [2024-07-26 01:10:19.586226] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.586232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-07-26 01:10:19.586239] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.586249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.415 [2024-07-26 01:10:19.586252] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.586266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-07-26 01:10:19.586269] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.586280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.415 [2024-07-26 01:10:19.586282] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.586296] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.586298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-07-26 01:10:19.586308] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is
same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.586312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.415 [2024-07-26 01:10:19.586322] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.586328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-07-26 01:10:19.586334] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.586342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.415 [2024-07-26 01:10:19.586347] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.586358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-07-26 01:10:19.586370] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.586380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.415 [2024-07-26 01:10:19.586383] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.586396] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.586397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.415 [2024-07-26 01:10:19.586411] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.415 [2024-07-26 01:10:19.586413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.416 [2024-07-26 01:10:19.586424] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.416 [2024-07-26 01:10:19.586429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-07-26 01:10:19.586437] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.416 [2024-07-26 01:10:19.586443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.416 [2024-07-26 01:10:19.586450] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.416 [2024-07-26 01:10:19.586459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-07-26 01:10:19.586465] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.416 [2024-07-26 01:10:19.586473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.416 [2024-07-26 01:10:19.586478] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.416 [2024-07-26 01:10:19.586488] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-07-26 01:10:19.586490] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.416 [2024-07-26 01:10:19.586504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.416 [2024-07-26 01:10:19.586518] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.416 [2024-07-26 01:10:19.586533] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.416 [2024-07-26 01:10:19.586536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-07-26 01:10:19.586546] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.416 [2024-07-26 01:10:19.586549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.416 [2024-07-26 01:10:19.586558] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.416 [2024-07-26 01:10:19.586564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-07-26 01:10:19.586571] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.416 [2024-07-26 01:10:19.586578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.416 
[2024-07-26 01:10:19.586583] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.416 [2024-07-26 01:10:19.586593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-07-26 01:10:19.586595] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.416 [2024-07-26 01:10:19.586608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.416 [2024-07-26 01:10:19.586608] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.416 [2024-07-26 01:10:19.586623] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.416 [2024-07-26 01:10:19.586624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-07-26 01:10:19.586635] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.416 [2024-07-26 01:10:19.586638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.416 [2024-07-26 01:10:19.586647] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.416 [2024-07-26 01:10:19.586653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-07-26 01:10:19.586662] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the 
state(5) to be set 00:27:49.416 [2024-07-26 01:10:19.586666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.416 [2024-07-26 01:10:19.586675] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.416 [2024-07-26 01:10:19.586681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-07-26 01:10:19.586687] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.416 [2024-07-26 01:10:19.586694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.416 [2024-07-26 01:10:19.586699] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.416 [2024-07-26 01:10:19.586709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-07-26 01:10:19.586711] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.416 [2024-07-26 01:10:19.586724] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.416 [2024-07-26 01:10:19.586726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.416 [2024-07-26 01:10:19.586735] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x649070 is same with the state(5) to be set 00:27:49.416 [2024-07-26 01:10:19.586742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 
nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-07-26 01:10:19.586756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.416 [2024-07-26 01:10:19.586771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-07-26 01:10:19.586784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.416 [2024-07-26 01:10:19.586799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-07-26 01:10:19.586812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.416 [2024-07-26 01:10:19.586827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-07-26 01:10:19.586840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.416 [2024-07-26 01:10:19.586855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-07-26 01:10:19.586868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.416 [2024-07-26 01:10:19.586883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-07-26 01:10:19.586896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:27:49.416 [2024-07-26 01:10:19.586912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-07-26 01:10:19.586932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.416 [2024-07-26 01:10:19.586948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-07-26 01:10:19.586962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.416 [2024-07-26 01:10:19.586978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-07-26 01:10:19.586992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.416 [2024-07-26 01:10:19.587007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-07-26 01:10:19.587020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.416 [2024-07-26 01:10:19.587035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-07-26 01:10:19.587074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.416 [2024-07-26 01:10:19.587092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-07-26 
01:10:19.587106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.416 [2024-07-26 01:10:19.587121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-07-26 01:10:19.587134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.416 [2024-07-26 01:10:19.587149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-07-26 01:10:19.587163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.416 [2024-07-26 01:10:19.587178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-07-26 01:10:19.587192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.416 [2024-07-26 01:10:19.587207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-07-26 01:10:19.587222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.416 [2024-07-26 01:10:19.587237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.416 [2024-07-26 01:10:19.587251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.416 [2024-07-26 01:10:19.587266] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.417 [2024-07-26 01:10:19.587280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.417 [2024-07-26 01:10:19.587295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.417 [2024-07-26 01:10:19.587309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.417 [2024-07-26 01:10:19.587328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.417 [2024-07-26 01:10:19.587342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.417 [2024-07-26 01:10:19.587357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.417 [2024-07-26 01:10:19.587387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.417 [2024-07-26 01:10:19.587403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.417 [2024-07-26 01:10:19.587416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.417 [2024-07-26 01:10:19.587431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.417 [2024-07-26 01:10:19.587444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.417 [2024-07-26 01:10:19.587459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.417 [2024-07-26 01:10:19.587472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.417 [2024-07-26 01:10:19.587487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.417 [2024-07-26 01:10:19.587500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.417 [2024-07-26 01:10:19.587515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.417 [2024-07-26 01:10:19.587528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.417 [2024-07-26 01:10:19.587542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.417 [2024-07-26 01:10:19.587555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.417 [2024-07-26 01:10:19.587570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.417 [2024-07-26 01:10:19.587583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.417 [2024-07-26 01:10:19.587598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.417 [2024-07-26 01:10:19.587611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.417 [2024-07-26 01:10:19.587625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.417 [2024-07-26 01:10:19.587638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.417 [2024-07-26 01:10:19.587653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.417 [2024-07-26 01:10:19.587666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.417 [2024-07-26 01:10:19.587681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.417 [2024-07-26 01:10:19.587712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.417 [2024-07-26 01:10:19.587731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.417 [2024-07-26 01:10:19.587746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.417 [2024-07-26 01:10:19.587761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.417 [2024-07-26 01:10:19.587775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.417 
[2024-07-26 01:10:19.587791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.417 [2024-07-26 01:10:19.587804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.417 [2024-07-26 01:10:19.587820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.417 [2024-07-26 01:10:19.587833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.417 [2024-07-26 01:10:19.587848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.417 [2024-07-26 01:10:19.587862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.417 [2024-07-26 01:10:19.587877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.417 [2024-07-26 01:10:19.587890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.417 [2024-07-26 01:10:19.587906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.417 [2024-07-26 01:10:19.587919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.417 [2024-07-26 01:10:19.587934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.417 [2024-07-26 01:10:19.587948] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.417 [2024-07-26 01:10:19.587963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.417 [2024-07-26 01:10:19.587976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.417 [2024-07-26 01:10:19.587991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.417 [2024-07-26 01:10:19.588004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.417 [2024-07-26 01:10:19.588019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.417 [2024-07-26 01:10:19.588032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.417 [2024-07-26 01:10:19.588053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.417 [2024-07-26 01:10:19.588074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.417 [2024-07-26 01:10:19.588093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.417 [2024-07-26 01:10:19.588109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.417 [2024-07-26 01:10:19.588124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.417 [2024-07-26 01:10:19.588138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.417 [2024-07-26 01:10:19.588153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.417 [2024-07-26 01:10:19.588166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.417 [2024-07-26 01:10:19.588214] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.417 [2024-07-26 01:10:19.588237] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.417 [2024-07-26 01:10:19.588250] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.417 [2024-07-26 01:10:19.588253] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x66c0e0 was disconnected and freed. reset controller. 
00:27:49.417 [2024-07-26 01:10:19.588263] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.417 [2024-07-26 01:10:19.588276] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.417 [2024-07-26 01:10:19.588289] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.417 [2024-07-26 01:10:19.588302] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.417 [2024-07-26 01:10:19.588314] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.417 [2024-07-26 01:10:19.588327] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.417 [2024-07-26 01:10:19.588339] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.417 [2024-07-26 01:10:19.588355] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.417 [2024-07-26 01:10:19.588367] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.417 [2024-07-26 01:10:19.588380] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.417 [2024-07-26 01:10:19.588392] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.417 [2024-07-26 01:10:19.588404] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.417 [2024-07-26 01:10:19.588431] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.417 [2024-07-26 01:10:19.588448] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.417 [2024-07-26 01:10:19.588461] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588496] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588512] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588529] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588543] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588555] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588568] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588580] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588592] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588604] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588616] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588628] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588640] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588672] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588686] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588698] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588709] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588723] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588735] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588747] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588759] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588771] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588783] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 
is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588795] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588807] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588819] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588831] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588843] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588855] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588868] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588884] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588897] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588909] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588920] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588932] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 
00:27:49.418 [2024-07-26 01:10:19.588944] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588957] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588969] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588981] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.588993] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.589005] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.589017] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.589030] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.589042] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.589069] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.589082] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646a10 is same with the state(5) to be set 00:27:49.418 [2024-07-26 01:10:19.590434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.418 [2024-07-26 01:10:19.590503] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x668ee0 (9): Bad file descriptor 00:27:49.418
[2024-07-26 01:10:19.591040] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.418
[2024-07-26 01:10:19.591090] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.418
[2024-07-26 01:10:19.591106] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.418
[2024-07-26 01:10:19.591109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.418
[2024-07-26 01:10:19.591119] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.418
[2024-07-26 01:10:19.591132] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.418
[2024-07-26 01:10:19.591136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.418
[2024-07-26 01:10:19.591146] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.418
[2024-07-26 01:10:19.591152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.418
[2024-07-26 01:10:19.591159] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.418
[2024-07-26 01:10:19.591175] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.418
[2024-07-26 01:10:19.591175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.418
[2024-07-26 01:10:19.591188] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.418
[2024-07-26 01:10:19.591192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.418
[2024-07-26 01:10:19.591201] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.418
[2024-07-26 01:10:19.591206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.418
[2024-07-26 01:10:19.591214] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.418
[2024-07-26 01:10:19.591220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.418
[2024-07-26 01:10:19.591226] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.418
[2024-07-26 01:10:19.591234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.418
[2024-07-26 01:10:19.591239] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.418
[2024-07-26 01:10:19.591248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3e910 is same with the state(5) to be set 00:27:49.418
[2024-07-26 01:10:19.591252] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.418
[2024-07-26 01:10:19.591265] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.418
[2024-07-26 01:10:19.591277] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.418
[2024-07-26 01:10:19.591290] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.418
[2024-07-26 01:10:19.591302] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.418
[2024-07-26 01:10:19.591314] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.418
[2024-07-26 01:10:19.591317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.418
[2024-07-26 01:10:19.591326] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.418
[2024-07-26 01:10:19.591339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.418
[2024-07-26 01:10:19.591339] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.418
[2024-07-26 01:10:19.591356] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.418
[2024-07-26 01:10:19.591359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.418
[2024-07-26 01:10:19.591368] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419
[2024-07-26 01:10:19.591373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.419
[2024-07-26 01:10:19.591380] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419
[2024-07-26 01:10:19.591392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.419
[2024-07-26 01:10:19.591394] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419
[2024-07-26 01:10:19.591408] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419
[2024-07-26 01:10:19.591420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.419
[2024-07-26 01:10:19.591430] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419
[2024-07-26 01:10:19.591435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.419
[2024-07-26 01:10:19.591443] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419
[2024-07-26 01:10:19.591448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.419
[2024-07-26 01:10:19.591456] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419
[2024-07-26 01:10:19.591461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa98a0 is same with the state(5) to be set 00:27:49.419
[2024-07-26 01:10:19.591469] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419
[2024-07-26 01:10:19.591481] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419
[2024-07-26 01:10:19.591493] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419
[2024-07-26 01:10:19.591519] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419
[2024-07-26 01:10:19.591532] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419
[2024-07-26 01:10:19.591532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.419
[2024-07-26 01:10:19.591550] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419
[2024-07-26 01:10:19.591553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.419
[2024-07-26 01:10:19.591565] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419
[2024-07-26 01:10:19.591568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.419
[2024-07-26 01:10:19.591578] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419
[2024-07-26 01:10:19.591582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.419
[2024-07-26 01:10:19.591592] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419
[2024-07-26 01:10:19.591596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.419
[2024-07-26 01:10:19.591604] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419
[2024-07-26 01:10:19.591609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.419
[2024-07-26 01:10:19.591631] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419
[2024-07-26 01:10:19.591632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.419
[2024-07-26 01:10:19.591646] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419
[2024-07-26 01:10:19.591648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.419
[2024-07-26 01:10:19.591661] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419
[2024-07-26 01:10:19.591662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa1940 is same with the state(5) to be set 00:27:49.419
[2024-07-26 01:10:19.591676] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419
[2024-07-26 01:10:19.591688] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419
[2024-07-26 01:10:19.591705] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419 [2024-07-26 01:10:19.591719] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419 [2024-07-26 01:10:19.591731] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419 [2024-07-26 01:10:19.591743] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419 [2024-07-26 01:10:19.591755] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419 [2024-07-26 01:10:19.591767] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419 [2024-07-26 01:10:19.591779] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419 [2024-07-26 01:10:19.591792] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419 [2024-07-26 01:10:19.591804] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419 [2024-07-26 01:10:19.591816] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419 [2024-07-26 01:10:19.591828] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419 [2024-07-26 01:10:19.591840] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419 [2024-07-26 01:10:19.591852] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419 [2024-07-26 01:10:19.591864] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419 [2024-07-26 01:10:19.591877] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419 [2024-07-26 01:10:19.591888] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419 [2024-07-26 01:10:19.591900] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419 [2024-07-26 01:10:19.591912] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419 [2024-07-26 01:10:19.591928] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646ed0 is same with the state(5) to be set 00:27:49.419 [2024-07-26 01:10:19.593223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.419 [2024-07-26 01:10:19.593253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x668ee0 with addr=10.0.0.2, port=4420 00:27:49.419 [2024-07-26 01:10:19.593269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x668ee0 is same with the state(5) to be set 00:27:49.419 [2024-07-26 01:10:19.593265] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.419 [2024-07-26 01:10:19.593296] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.419 [2024-07-26 01:10:19.593311] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 
is same with the state(5) to be set 00:27:49.419 [2024-07-26 01:10:19.593324] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.419 [2024-07-26 01:10:19.593336] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.420 [2024-07-26 01:10:19.593352] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.420 [2024-07-26 01:10:19.593365] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.420 [2024-07-26 01:10:19.593376] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.420 [2024-07-26 01:10:19.593389] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.420 [2024-07-26 01:10:19.593401] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.420 [2024-07-26 01:10:19.593413] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.420 [2024-07-26 01:10:19.593425] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.420 [2024-07-26 01:10:19.593423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.420 [2024-07-26 01:10:19.593437] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.420 [2024-07-26 01:10:19.593446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:49.420
[2024-07-26 01:10:19.593450] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.420
[2024-07-26 01:10:19.593463] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.420
[2024-07-26 01:10:19.593469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.420
[2024-07-26 01:10:19.593475] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.420
[2024-07-26 01:10:19.593484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.420
[2024-07-26 01:10:19.593487] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.420
[2024-07-26 01:10:19.593500] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.420
[2024-07-26 01:10:19.593501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.420
[2024-07-26 01:10:19.593514] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.420
[2024-07-26 01:10:19.593521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.420
[2024-07-26 01:10:19.593526] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.420
[2024-07-26 01:10:19.593539] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.420
[2024-07-26 01:10:19.593538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.420
[2024-07-26 01:10:19.593553] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.420
[2024-07-26 01:10:19.593555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.420
[2024-07-26 01:10:19.593565] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.420
[2024-07-26 01:10:19.593572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.420
[2024-07-26 01:10:19.593578] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.593586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.421
[2024-07-26 01:10:19.593590] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.593601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.421
[2024-07-26 01:10:19.593603] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.593616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.421
[2024-07-26 01:10:19.593616] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.593631] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.593633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66d380 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.593645] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.593658] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.593670] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.593682] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.593695] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.593704] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x66d380 was disconnected and freed. reset controller. 00:27:49.421
[2024-07-26 01:10:19.593707] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.593721] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.593733] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.593750] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.593763] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.593775] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.593788] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.593800] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.593812] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.593824] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.593837] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.593849] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.593861] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set
00:27:49.421
[2024-07-26 01:10:19.593873] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.593885] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.593902] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.593954] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.593980] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.594004] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.594026] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.594067] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.594093] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.594117] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.594139] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.594162] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.594162] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:49.421
[2024-07-26 01:10:19.594188] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.594206] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.594203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x668ee0 (9): Bad file descriptor 00:27:49.421
[2024-07-26 01:10:19.594222] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.594235] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.594252] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6473b0 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.595188] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.595219] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.595234] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.595247] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.595259] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.595271] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.421 [2024-07-26 01:10:19.595283] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.421 [2024-07-26 01:10:19.595296] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.421 [2024-07-26 01:10:19.595309] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.421 [2024-07-26 01:10:19.595321] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.421 [2024-07-26 01:10:19.595338] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.421 [2024-07-26 01:10:19.595351] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.421 [2024-07-26 01:10:19.595375] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.421 [2024-07-26 01:10:19.595388] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.421 [2024-07-26 01:10:19.595400] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.421 [2024-07-26 01:10:19.595412] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.421 [2024-07-26 01:10:19.595433] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.421 [2024-07-26 01:10:19.595445] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with 
the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.595460] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.595473] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.595486] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.595498] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.595511] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.595523] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.595528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:49.421
[2024-07-26 01:10:19.595535] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.595560] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.595571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa1940 (9): Bad file descriptor 00:27:49.421
[2024-07-26 01:10:19.595574] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.595589] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.421
[2024-07-26 01:10:19.595594]
nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.421 [2024-07-26 01:10:19.595601] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.421 [2024-07-26 01:10:19.595608] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.421 [2024-07-26 01:10:19.595614] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.421 [2024-07-26 01:10:19.595624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.421 [2024-07-26 01:10:19.595627] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.421 [2024-07-26 01:10:19.595639] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.422 [2024-07-26 01:10:19.595651] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.422 [2024-07-26 01:10:19.595664] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.422 [2024-07-26 01:10:19.595676] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.422 [2024-07-26 01:10:19.595688] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.422 [2024-07-26 01:10:19.595701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.422 [2024-07-26 01:10:19.595713] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x647870 is same with the state(5) to be set 00:27:49.422 [2024-07-26 01:10:19.595725] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.422 [2024-07-26 01:10:19.595737] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.422 [2024-07-26 01:10:19.595750] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.422 [2024-07-26 01:10:19.595762] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.422 [2024-07-26 01:10:19.595774] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.422 [2024-07-26 01:10:19.595787] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.422 [2024-07-26 01:10:19.595799] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.422 [2024-07-26 01:10:19.595811] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.422 [2024-07-26 01:10:19.595823] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.422 [2024-07-26 01:10:19.595835] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.422 [2024-07-26 01:10:19.595847] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.422 [2024-07-26 01:10:19.595866] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) 
to be set 00:27:49.422 [2024-07-26 01:10:19.595879] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.422 [2024-07-26 01:10:19.595891] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.422 [2024-07-26 01:10:19.595904] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.422 [2024-07-26 01:10:19.595916] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.422 [2024-07-26 01:10:19.595930] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.422 [2024-07-26 01:10:19.595943] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.422 [2024-07-26 01:10:19.595956] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.422 [2024-07-26 01:10:19.595972] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.422 [2024-07-26 01:10:19.595985] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.422 [2024-07-26 01:10:19.595997] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.422 [2024-07-26 01:10:19.596012] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.422 [2024-07-26 01:10:19.596025] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.422 [2024-07-26 
01:10:19.596037] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647870 is same with the state(5) to be set 00:27:49.422
[2024-07-26 01:10:19.596149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.422
[2024-07-26 01:10:19.596494] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:49.422
[2024-07-26 01:10:19.596961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.422
[2024-07-26 01:10:19.596989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa1940 with addr=10.0.0.2, port=4420 00:27:49.422
[2024-07-26 01:10:19.597005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa1940 is same with the state(5) to be set 00:27:49.422
[2024-07-26 01:10:19.597356] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647d50 is same with the state(5) to be set 00:27:49.422
[2024-07-26 01:10:19.597383] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647d50 is same with the state(5) to be set 00:27:49.422
[2024-07-26 01:10:19.597389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa1940 (9): Bad file descriptor 00:27:49.422
[2024-07-26 01:10:19.597397] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647d50 is same with the state(5) to be set 00:27:49.422
[2024-07-26 01:10:19.597412] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647d50 is same with the state(5) to be set 00:27:49.422
[2024-07-26 01:10:19.597425] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647d50 is same with the state(5) to be set 00:27:49.422
[2024-07-26 01:10:19.597438] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647d50 is same with the state(5) to be set 00:27:49.422
[2024-07-26
01:10:19.597451] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x647d50 is same with the state(5) to be set 00:27:49.422 [2024-07-26 01:10:19.597468] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:49.422 [tcp.c:1653 message above repeated for tqpair=0x647d50 dozens of times between 01:10:19.597463 and 01:10:19.598173] 00:27:49.423 [2024-07-26 01:10:19.599177] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:49.423 [2024-07-26 01:10:19.599202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:49.423 [2024-07-26 01:10:19.599217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:27:49.423 [2024-07-26 01:10:19.599259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.423 [2024-07-26 01:10:19.599280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.423 [analogous READ command / ABORTED - SQ DELETION completion pairs repeated for cid:5 through cid:55, lba 25216 through 31616 in steps of 128; from 01:10:19.599897 onward these records were interleaved mid-line with tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6486d0 is same with the state(5) to be set, itself repeated dozens of times through 01:10:19.600775] 00:27:49.425 [2024-07-26 01:10:19.600907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.425 [2024-07-26 01:10:19.600920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.425 [analogous WRITE / ABORTED - SQ DELETION pairs repeated for cid:1 through cid:3, lba 32896 through 33152] 00:27:49.425 [2024-07-26 01:10:19.601021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.425 [2024-07-26 01:10:19.601034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.425 [analogous READ / ABORTED - SQ DELETION pairs repeated for cid:57 through cid:60, lba 31872 through 32256] 00:27:49.425 [2024-07-26 01:10:19.601187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.425 [2024-07-26 01:10:19.601201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.425 [2024-07-26 01:10:19.601216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.425 [2024-07-26 01:10:19.601229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.425 [2024-07-26 01:10:19.601244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.425 [2024-07-26 01:10:19.601258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.425 [2024-07-26 01:10:19.601271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1d390 is same with the state(5) to be set 00:27:49.425 [2024-07-26 01:10:19.601365] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc1d390 was disconnected and freed. reset controller. 
00:27:49.425 [2024-07-26 01:10:19.601491] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.425 [2024-07-26 01:10:19.601524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.425 [2024-07-26 01:10:19.601540] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.425 [2024-07-26 01:10:19.601553] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.425 [2024-07-26 01:10:19.601565] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.425 [2024-07-26 01:10:19.601578] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.425 [2024-07-26 01:10:19.601591] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.425 [2024-07-26 01:10:19.601604] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.425 [2024-07-26 01:10:19.601599] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc004a0 was disconnected and freed. reset controller. 
00:27:49.425 [2024-07-26 01:10:19.601620] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.425 [2024-07-26 01:10:19.601637] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.601651] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.601663] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.601676] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.601688] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.601701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.601713] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.601726] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.601738] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.601751] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.601748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.426 [2024-07-26 01:10:19.601764] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.601776] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.601789] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.601802] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.601803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc3e910 (9): Bad file descriptor 00:27:49.426 [2024-07-26 01:10:19.601814] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.601827] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.601840] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.601853] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.601865] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.601862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.426 [2024-07-26 01:10:19.601880] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 
01:10:19.601884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.426 [2024-07-26 01:10:19.601894] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.601900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.426 [2024-07-26 01:10:19.601907] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.601914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.426 [2024-07-26 01:10:19.601922] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.601929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.426 [2024-07-26 01:10:19.601935] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.601943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.426 [2024-07-26 01:10:19.601949] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.601958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.426 [2024-07-26 01:10:19.601962] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 
00:27:49.426 [2024-07-26 01:10:19.601971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.426 [2024-07-26 01:10:19.601975] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.601985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x597610 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.601988] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.602001] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.602013] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.602026] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.602025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.426 [2024-07-26 01:10:19.602038] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.602056] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.602057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.426 [2024-07-26 01:10:19.602078] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 
00:27:49.426 [2024-07-26 01:10:19.602082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.426 [2024-07-26 01:10:19.602092] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.602096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.426 [2024-07-26 01:10:19.602105] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.602111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.426 [2024-07-26 01:10:19.602118] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.602125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.426 [2024-07-26 01:10:19.602131] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.602144] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.602145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.426 [2024-07-26 01:10:19.602158] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.602160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:49.426 [2024-07-26 01:10:19.602171] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.602174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaac3b0 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.602184] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.602197] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.602204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa98a0 (9): Bad file descriptor 00:27:49.426 [2024-07-26 01:10:19.602210] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.602222] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.602235] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.602247] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.602252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.426 [2024-07-26 01:10:19.602259] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.602272] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426
[2024-07-26 01:10:19.602273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.426 [2024-07-26 01:10:19.602286] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.602290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.426 [2024-07-26 01:10:19.602299] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.602304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.426 [2024-07-26 01:10:19.602312] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.602319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.426 [2024-07-26 01:10:19.602325] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.602332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.426 [2024-07-26 01:10:19.602337] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b90 is same with the state(5) to be set 00:27:49.426 [2024-07-26 01:10:19.602347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.426 [2024-07-26 01:10:19.602369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.427 [2024-07-26 01:10:19.602382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad280 is same with the state(5) to be set 00:27:49.427 [2024-07-26 01:10:19.602439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.427 [2024-07-26 01:10:19.602460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.427 [2024-07-26 01:10:19.602474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.427 [2024-07-26 01:10:19.602488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.427 [2024-07-26 01:10:19.602502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.427 [2024-07-26 01:10:19.602516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.427 [2024-07-26 01:10:19.602530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.427 [2024-07-26 01:10:19.602543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.427 [2024-07-26 01:10:19.602555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacc40 is same with the state(5) to be set 00:27:49.427 [2024-07-26 01:10:19.602597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.427 [2024-07-26 01:10:19.602618] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.427 [2024-07-26 01:10:19.602633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.427 [2024-07-26 01:10:19.602646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.427 [2024-07-26 01:10:19.602661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.427 [2024-07-26 01:10:19.602674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.427 [2024-07-26 01:10:19.602688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.427 [2024-07-26 01:10:19.602701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.427 [2024-07-26 01:10:19.602714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66f490 is same with the state(5) to be set 00:27:49.427 [2024-07-26 01:10:19.604428] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:49.427 [2024-07-26 01:10:19.604461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:27:49.427 [2024-07-26 01:10:19.604484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x597610 (9): Bad file descriptor 00:27:49.427 [2024-07-26 01:10:19.604508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaad280 (9): Bad file descriptor 00:27:49.427 [2024-07-26 01:10:19.604580] 
nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:49.427 [2024-07-26 01:10:19.604792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.427 [2024-07-26 01:10:19.605486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.427 [2024-07-26 01:10:19.605515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaad280 with addr=10.0.0.2, port=4420 00:27:49.427 [2024-07-26 01:10:19.605532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad280 is same with the state(5) to be set 00:27:49.427 [2024-07-26 01:10:19.605644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.427 [2024-07-26 01:10:19.605670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x597610 with addr=10.0.0.2, port=4420 00:27:49.427 [2024-07-26 01:10:19.605685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x597610 is same with the state(5) to be set 00:27:49.427 [2024-07-26 01:10:19.605786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.427 [2024-07-26 01:10:19.605811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x668ee0 with addr=10.0.0.2, port=4420 00:27:49.427 [2024-07-26 01:10:19.605827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x668ee0 is same with the state(5) to be set 00:27:49.427 [2024-07-26 01:10:19.605909] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:49.427 [2024-07-26 01:10:19.606093] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:49.427 [2024-07-26 01:10:19.606188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaad280 (9): Bad file descriptor 00:27:49.427 [2024-07-26 01:10:19.606215] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x597610 (9): Bad file descriptor 00:27:49.427 [2024-07-26 01:10:19.606233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x668ee0 (9): Bad file descriptor 00:27:49.427 [2024-07-26 01:10:19.606319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.427 [2024-07-26 01:10:19.606342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.427 [2024-07-26 01:10:19.606373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.427 [2024-07-26 01:10:19.606388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.427 [2024-07-26 01:10:19.606405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.427 [2024-07-26 01:10:19.606428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.427 [2024-07-26 01:10:19.606444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.427 [2024-07-26 01:10:19.606458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.427 [2024-07-26 01:10:19.606474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.427 [2024-07-26 01:10:19.606487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.427 [2024-07-26 01:10:19.606503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.427 [2024-07-26 01:10:19.606517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.427 [2024-07-26 01:10:19.606532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.427 [2024-07-26 01:10:19.606551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.427 [2024-07-26 01:10:19.606568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.427 [2024-07-26 01:10:19.606582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.427 [2024-07-26 01:10:19.606597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.427 [2024-07-26 01:10:19.606611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.427 [2024-07-26 01:10:19.606626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.427 [2024-07-26 01:10:19.606640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.427 [2024-07-26 01:10:19.606655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:49.427 [2024-07-26 01:10:19.606669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.427 [2024-07-26 01:10:19.606685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.427 [2024-07-26 01:10:19.606698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.427 [2024-07-26 01:10:19.606714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.427 [2024-07-26 01:10:19.606727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.427 [2024-07-26 01:10:19.606743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.427 [2024-07-26 01:10:19.606756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.427 [2024-07-26 01:10:19.606772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.427 [2024-07-26 01:10:19.606786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.427 [2024-07-26 01:10:19.606802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.427 [2024-07-26 01:10:19.606816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.427 [2024-07-26 01:10:19.606832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.427 [2024-07-26 01:10:19.606845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.427 [2024-07-26 01:10:19.606861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.427 [2024-07-26 01:10:19.606875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.427 [2024-07-26 01:10:19.606890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.427 [2024-07-26 01:10:19.606903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.427 [2024-07-26 01:10:19.606925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.427 [2024-07-26 01:10:19.606939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.427 [2024-07-26 01:10:19.606955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.427 [2024-07-26 01:10:19.606968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.427 [2024-07-26 01:10:19.606984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.606997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.607982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.607996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.608011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.608030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.608055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.608081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.608098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.608112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.608128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.428 [2024-07-26 01:10:19.608141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.428 [2024-07-26 01:10:19.608156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.429 [2024-07-26 01:10:19.608170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.429 [2024-07-26 01:10:19.608185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.429 [2024-07-26 01:10:19.608198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.429 [2024-07-26 01:10:19.608213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.429 [2024-07-26 01:10:19.608227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.429 [2024-07-26 01:10:19.608242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.429 [2024-07-26 01:10:19.608256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.429 [2024-07-26 01:10:19.608271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.429 [2024-07-26 01:10:19.608284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.429 [2024-07-26 01:10:19.608298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc01970 is same with the state(5) to be set
00:27:49.429 [2024-07-26 01:10:19.608400] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc01970 was disconnected and freed. reset controller.
00:27:49.429 [2024-07-26 01:10:19.608470] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:49.429 [2024-07-26 01:10:19.608511] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:27:49.429 [2024-07-26 01:10:19.608529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:27:49.429 [2024-07-26 01:10:19.608544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:27:49.429 [2024-07-26 01:10:19.608562] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:27:49.429 [2024-07-26 01:10:19.608575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:27:49.429 [2024-07-26 01:10:19.608588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:27:49.429 [2024-07-26 01:10:19.608604] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:49.429 [2024-07-26 01:10:19.608617] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:49.429 [2024-07-26 01:10:19.608630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:49.429 [2024-07-26 01:10:19.609838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:49.429 [2024-07-26 01:10:19.609862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:49.429 [2024-07-26 01:10:19.609875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:49.429 [2024-07-26 01:10:19.609895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:27:49.429 [2024-07-26 01:10:19.609922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x66f490 (9): Bad file descriptor
00:27:49.429 [2024-07-26 01:10:19.609990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:27:49.429 [2024-07-26 01:10:19.610486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.429 [2024-07-26 01:10:19.610514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x66f490 with addr=10.0.0.2, port=4420
00:27:49.429 [2024-07-26 01:10:19.610530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66f490 is same with the state(5) to be set
00:27:49.429 [2024-07-26 01:10:19.610633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.429 [2024-07-26 01:10:19.610659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa1940 with addr=10.0.0.2, port=4420
00:27:49.429 [2024-07-26 01:10:19.610674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa1940 is same with the state(5) to be set
00:27:49.429 [2024-07-26 01:10:19.610753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x66f490 (9): Bad file descriptor
00:27:49.429 [2024-07-26 01:10:19.610779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa1940 (9): Bad file descriptor
00:27:49.429 [2024-07-26 01:10:19.610837] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:27:49.429 [2024-07-26 01:10:19.610855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:27:49.429 [2024-07-26 01:10:19.610869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:27:49.429 [2024-07-26 01:10:19.610888] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:27:49.429 [2024-07-26 01:10:19.610901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:27:49.429 [2024-07-26 01:10:19.610914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:27:49.429 [2024-07-26 01:10:19.610960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:49.429 [2024-07-26 01:10:19.610977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:49.429 [2024-07-26 01:10:19.611797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaac3b0 (9): Bad file descriptor
00:27:49.429 [2024-07-26 01:10:19.611868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:49.429 [2024-07-26 01:10:19.611890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.429 [2024-07-26 01:10:19.611906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:49.429 [2024-07-26 01:10:19.611919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.429 [2024-07-26 01:10:19.611933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:49.429 [2024-07-26 01:10:19.611946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.429 [2024-07-26 01:10:19.611961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:49.429 [2024-07-26 01:10:19.611980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.429 [2024-07-26 01:10:19.611994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc412b0 is same with the state(5) to be set
00:27:49.429 [2024-07-26 01:10:19.612020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaacc40 (9): Bad file descriptor
00:27:49.429 [2024-07-26 01:10:19.612163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.429 [2024-07-26 01:10:19.612185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.429 [2024-07-26 01:10:19.612208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.429 [2024-07-26 01:10:19.612224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.429 [2024-07-26 01:10:19.612240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.429 [2024-07-26 01:10:19.612255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.429 [2024-07-26 01:10:19.612270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.429 [2024-07-26 01:10:19.612284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.429 [2024-07-26 01:10:19.612300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.429 [2024-07-26 01:10:19.612313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.429 [2024-07-26 01:10:19.612329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.429 [2024-07-26 01:10:19.612351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.429 [2024-07-26 01:10:19.612367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.429 [2024-07-26 01:10:19.612380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.429 [2024-07-26 01:10:19.612396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.429 [2024-07-26 01:10:19.612410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.429 [2024-07-26 01:10:19.612425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.430 [2024-07-26 01:10:19.612439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.430 [2024-07-26 01:10:19.612454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.430 [2024-07-26 01:10:19.612468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.430 [2024-07-26 01:10:19.612484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.430 [2024-07-26 01:10:19.612497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.430 [2024-07-26 01:10:19.612518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.430 [2024-07-26 01:10:19.612532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.430 [2024-07-26 01:10:19.612548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.430 [2024-07-26 01:10:19.612562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.430 [2024-07-26 01:10:19.612578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.430 [2024-07-26 01:10:19.612591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.430 [2024-07-26 01:10:19.612607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.430 [2024-07-26 01:10:19.612621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.430 [2024-07-26 01:10:19.612637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.430 [2024-07-26 01:10:19.612651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.430 [2024-07-26 01:10:19.612667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.430 [2024-07-26 01:10:19.612680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.430 [2024-07-26 01:10:19.612696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.430 [2024-07-26 01:10:19.612710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.430 [2024-07-26 01:10:19.612726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.430 [2024-07-26 01:10:19.612739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.430 [2024-07-26 01:10:19.612755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.430 [2024-07-26 01:10:19.612769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.430 [2024-07-26 01:10:19.612784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.430 [2024-07-26 01:10:19.612798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.430 [2024-07-26 01:10:19.612814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.430 [2024-07-26 01:10:19.612828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.430 [2024-07-26 01:10:19.612843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.430 [2024-07-26 01:10:19.612857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.430 [2024-07-26 01:10:19.612872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.430 [2024-07-26 01:10:19.612889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.430 [2024-07-26 01:10:19.612906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.430 [2024-07-26 01:10:19.612920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.430 [2024-07-26 01:10:19.612936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.430 [2024-07-26 01:10:19.612949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.430 [2024-07-26 01:10:19.612966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.430 [2024-07-26 01:10:19.612980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.430 [2024-07-26 01:10:19.612996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.430 [2024-07-26 01:10:19.613010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.430 [2024-07-26 01:10:19.613026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.430 [2024-07-26 01:10:19.613039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.430 [2024-07-26 01:10:19.613073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.430 [2024-07-26 01:10:19.613089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.430 [2024-07-26 01:10:19.613105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.430 [2024-07-26 01:10:19.613119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:49.430 [2024-07-26 01:10:19.613137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:49.430 [2024-07-26 01:10:19.613151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:27:49.430 [2024-07-26 01:10:19.613167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.430 [2024-07-26 01:10:19.613181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.430 [2024-07-26 01:10:19.613198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.430 [2024-07-26 01:10:19.613212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.430 [2024-07-26 01:10:19.613228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.430 [2024-07-26 01:10:19.613242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.430 [2024-07-26 01:10:19.613258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.430 [2024-07-26 01:10:19.613272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.430 [2024-07-26 01:10:19.613292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.430 [2024-07-26 01:10:19.613307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.430 [2024-07-26 01:10:19.613322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.430 [2024-07-26 
01:10:19.613336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.430 [2024-07-26 01:10:19.613362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.430 [2024-07-26 01:10:19.613376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.430 [2024-07-26 01:10:19.613392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.430 [2024-07-26 01:10:19.613406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.430 [2024-07-26 01:10:19.613422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.430 [2024-07-26 01:10:19.613435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.430 [2024-07-26 01:10:19.613451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.430 [2024-07-26 01:10:19.613465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.430 [2024-07-26 01:10:19.613481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.430 [2024-07-26 01:10:19.613495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.430 [2024-07-26 01:10:19.613510] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.430 [2024-07-26 01:10:19.613524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.430 [2024-07-26 01:10:19.613540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.430 [2024-07-26 01:10:19.613554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.430 [2024-07-26 01:10:19.613570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.430 [2024-07-26 01:10:19.613583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.430 [2024-07-26 01:10:19.613599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.430 [2024-07-26 01:10:19.613613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.430 [2024-07-26 01:10:19.613628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.430 [2024-07-26 01:10:19.613642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.613658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.613675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.613694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.613708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.613724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.613737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.613753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.613767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.613782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.613796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.613812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.613825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.613841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 
[2024-07-26 01:10:19.613855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.613871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.613885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.613901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.613915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.613931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.613945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.613961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.613974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.613990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.614003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.614020] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.614033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.614065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.614082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.614098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.614112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.614128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.614142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.614156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1be50 is same with the state(5) to be set 00:27:49.431 [2024-07-26 01:10:19.615479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.615504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.615527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.615544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.615560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.615574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.615590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.615603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.615619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.615633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.615649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.615662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.615678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.615692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:49.431 [2024-07-26 01:10:19.615708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.615721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.615737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.615751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.615771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.615786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.615802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.615816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.615831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.615845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.615860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.615874] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.615890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.615903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.615919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.615933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.615949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.615963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.615979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.615992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.616008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.616022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.616039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 
nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.616071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.616089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.616103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.616119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.616132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.616148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.431 [2024-07-26 01:10:19.616166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.431 [2024-07-26 01:10:19.616183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.616196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.616212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.616226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:49.432 [2024-07-26 01:10:19.616242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.616256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.616272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.616285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.616301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.616315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.616331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.616350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.616366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.616380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.616396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.616410] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.616425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.616439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.616455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.616468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.616485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.616499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.616515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.616530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.616550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.616564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.616581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.616595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.616611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.616625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.616641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.616655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.616671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.616684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.616700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.616714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.616731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.616745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.616761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.616776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.616791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.616805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.616821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.616835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.616851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.616865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.616881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.616895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.616910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 
01:10:19.616927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.616944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.616957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.616975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.616989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.617006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.617019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.617035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.617055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.617077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.617092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.617108] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.617121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.617137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.617151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.617167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.617181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.617197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.617211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.617227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.617241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.617256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.617269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.617285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.617299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.617319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.617334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.617360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.432 [2024-07-26 01:10:19.617374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.432 [2024-07-26 01:10:19.617390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.433 [2024-07-26 01:10:19.617404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.433 [2024-07-26 01:10:19.617420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.433 [2024-07-26 01:10:19.617433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.433 [2024-07-26 01:10:19.617449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.433 
[2024-07-26 01:10:19.617462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.433 [2024-07-26 01:10:19.617478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b200 is same with the state(5) to be set 00:27:49.433 [2024-07-26 01:10:19.619649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:49.433 [2024-07-26 01:10:19.619682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:49.433 [2024-07-26 01:10:19.620065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.433 [2024-07-26 01:10:19.620096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa98a0 with addr=10.0.0.2, port=4420 00:27:49.433 [2024-07-26 01:10:19.620113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa98a0 is same with the state(5) to be set 00:27:49.433 [2024-07-26 01:10:19.620214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.433 [2024-07-26 01:10:19.620239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc3e910 with addr=10.0.0.2, port=4420 00:27:49.433 [2024-07-26 01:10:19.620254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3e910 is same with the state(5) to be set 00:27:49.433 [2024-07-26 01:10:19.620805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:49.433 [2024-07-26 01:10:19.620830] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:27:49.433 [2024-07-26 01:10:19.620848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:49.433 [2024-07-26 01:10:19.620895] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa98a0 (9): Bad file descriptor 00:27:49.433 [2024-07-26 01:10:19.620919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc3e910 (9): Bad file descriptor 00:27:49.433 [2024-07-26 01:10:19.621119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.433 [2024-07-26 01:10:19.621147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x668ee0 with addr=10.0.0.2, port=4420 00:27:49.433 [2024-07-26 01:10:19.621163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x668ee0 is same with the state(5) to be set 00:27:49.433 [2024-07-26 01:10:19.621258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.433 [2024-07-26 01:10:19.621283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x597610 with addr=10.0.0.2, port=4420 00:27:49.433 [2024-07-26 01:10:19.621304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x597610 is same with the state(5) to be set 00:27:49.433 [2024-07-26 01:10:19.621412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.433 [2024-07-26 01:10:19.621437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaad280 with addr=10.0.0.2, port=4420 00:27:49.433 [2024-07-26 01:10:19.621452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad280 is same with the state(5) to be set 00:27:49.433 [2024-07-26 01:10:19.621467] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:49.433 [2024-07-26 01:10:19.621480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:49.433 [2024-07-26 01:10:19.621496] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:49.433 [2024-07-26 01:10:19.621516] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:49.433 [2024-07-26 01:10:19.621529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:49.433 [2024-07-26 01:10:19.621543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:49.433 [2024-07-26 01:10:19.621604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:49.433 [2024-07-26 01:10:19.621627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:27:49.433 [2024-07-26 01:10:19.621645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.433 [2024-07-26 01:10:19.621660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.433 [2024-07-26 01:10:19.621692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x668ee0 (9): Bad file descriptor 00:27:49.433 [2024-07-26 01:10:19.621715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x597610 (9): Bad file descriptor 00:27:49.433 [2024-07-26 01:10:19.621734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaad280 (9): Bad file descriptor 00:27:49.433 [2024-07-26 01:10:19.621900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.433 [2024-07-26 01:10:19.621926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa1940 with addr=10.0.0.2, port=4420 00:27:49.433 [2024-07-26 01:10:19.621942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa1940 is same with the state(5) to be set 00:27:49.433 [2024-07-26 01:10:19.622048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.433 [2024-07-26 01:10:19.622079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x66f490 with addr=10.0.0.2, port=4420 00:27:49.433 [2024-07-26 01:10:19.622096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66f490 is same with the state(5) to be set 00:27:49.433 [2024-07-26 01:10:19.622110] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.433 [2024-07-26 01:10:19.622123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.433 [2024-07-26 01:10:19.622136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:49.433 [2024-07-26 01:10:19.622154] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:49.433 [2024-07-26 01:10:19.622168] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:49.433 [2024-07-26 01:10:19.622182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:49.433 [2024-07-26 01:10:19.622203] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:49.433 [2024-07-26 01:10:19.622218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:49.433 [2024-07-26 01:10:19.622231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:49.433 [2024-07-26 01:10:19.622271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.433 [2024-07-26 01:10:19.622289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.433 [2024-07-26 01:10:19.622301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.433 [2024-07-26 01:10:19.622317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa1940 (9): Bad file descriptor 00:27:49.433 [2024-07-26 01:10:19.622336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x66f490 (9): Bad file descriptor 00:27:49.433 [2024-07-26 01:10:19.622378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc412b0 (9): Bad file descriptor 00:27:49.433 [2024-07-26 01:10:19.622448] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:49.433 [2024-07-26 01:10:19.622468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:49.433 [2024-07-26 01:10:19.622483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:49.433 [2024-07-26 01:10:19.622500] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:49.433 [2024-07-26 01:10:19.622513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:49.433 [2024-07-26 01:10:19.622526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:27:49.433 [2024-07-26 01:10:19.622585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.433 [2024-07-26 01:10:19.622606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.433 [2024-07-26 01:10:19.622631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.433 [2024-07-26 01:10:19.622646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.433 [2024-07-26 01:10:19.622663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.433 [2024-07-26 01:10:19.622678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.433 [2024-07-26 01:10:19.622695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.433 [2024-07-26 01:10:19.622710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.433 [2024-07-26 01:10:19.622726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.433 [2024-07-26 01:10:19.622739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.433 [2024-07-26 01:10:19.622755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.433 [2024-07-26 01:10:19.622769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.433 [2024-07-26 01:10:19.622785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.433 [2024-07-26 01:10:19.622804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.433 [2024-07-26 01:10:19.622821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.433 [2024-07-26 01:10:19.622835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.433 [2024-07-26 01:10:19.622852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.433 [2024-07-26 01:10:19.622865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.433 [2024-07-26 01:10:19.622881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.433 [2024-07-26 01:10:19.622895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.622911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 01:10:19.622925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.622941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 01:10:19.622955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.622971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 01:10:19.622985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.623001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 01:10:19.623015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.623030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 01:10:19.623044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.623072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 01:10:19.623089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.623105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 01:10:19.623119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:49.434 [2024-07-26 01:10:19.623134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 01:10:19.623148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.623164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 01:10:19.623178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.623198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 01:10:19.623213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.623229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 01:10:19.623242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.623259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 01:10:19.623272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.623288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 01:10:19.623302] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.623318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 01:10:19.623331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.623347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 01:10:19.623361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.623377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 01:10:19.623390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.623406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 01:10:19.623419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.623436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 01:10:19.623449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.623465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 01:10:19.623479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.623495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 01:10:19.623508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.623524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 01:10:19.623538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.623553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 01:10:19.623571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.623588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 01:10:19.623602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.623619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 01:10:19.623633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.623649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 01:10:19.623663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.623679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 01:10:19.623693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.623709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 01:10:19.623723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.623739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 01:10:19.623753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.623769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 01:10:19.623783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.623799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 
01:10:19.623813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.623828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 01:10:19.623841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.623857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.434 [2024-07-26 01:10:19.623871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.434 [2024-07-26 01:10:19.623887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.623901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.623918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.623932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.623951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.623966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.623982] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.623996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.624011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.624025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.624041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.624055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.624079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.624093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.624109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.624123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.624139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.624153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.624168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.624183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.624198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.624212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.624228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.624241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.624257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.624270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.624287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.624300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.624316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 
[2024-07-26 01:10:19.624334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.624350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.624363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.624379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.624393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.624409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.624422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.624438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.624452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.624468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.624482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.624498] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.624511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.624527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.624540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.624554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1e820 is same with the state(5) to be set 00:27:49.435 [2024-07-26 01:10:19.625816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.625840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.625860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.625876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.625892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.625906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.625922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.625935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.625951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.625965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.625986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.626000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.626016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.626030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.626056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.626078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.626094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.626109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:49.435 [2024-07-26 01:10:19.626125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.626139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.626155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.626170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.626185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.626199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.626214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.626228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.626244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.626258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.626274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.626288] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.626304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.626317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.435 [2024-07-26 01:10:19.626333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.435 [2024-07-26 01:10:19.626346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.626372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.626390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.626405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.626419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.626435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.626448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.626464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.626478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.626494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.626507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.626523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.626537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.626553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.626567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.626582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.626596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.626612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.626625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.626641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.626654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.626670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.626684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.626700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.626713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.626729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.626743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.626762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.626777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.626792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 
01:10:19.626806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.626822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.626835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.626851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.626864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.626880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.626894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.626909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.626923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.626939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.626953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.626968] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.626981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.626997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.627011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.627027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.627040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.627055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.627080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.627096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.627110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.627126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.627143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.627160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.627174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.627190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.627203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.627219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.627232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.627249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.627262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.627277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.627291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.627306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 
[2024-07-26 01:10:19.627319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.627335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.627348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.627364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.627377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.627393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.627406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.627422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.627435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.627451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.627464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.627480] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.627493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.436 [2024-07-26 01:10:19.627512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.436 [2024-07-26 01:10:19.627527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.437 [2024-07-26 01:10:19.627542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.437 [2024-07-26 01:10:19.627556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.437 [2024-07-26 01:10:19.627572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.437 [2024-07-26 01:10:19.627585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.437 [2024-07-26 01:10:19.627601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.437 [2024-07-26 01:10:19.627615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.437 [2024-07-26 01:10:19.627632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.437 [2024-07-26 01:10:19.627645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.437 [2024-07-26 01:10:19.627661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.437 [2024-07-26 01:10:19.627674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.437 [2024-07-26 01:10:19.627690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.437 [2024-07-26 01:10:19.627703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.437 [2024-07-26 01:10:19.627719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.437 [2024-07-26 01:10:19.627732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.437 [2024-07-26 01:10:19.627748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.437 [2024-07-26 01:10:19.627761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.437 [2024-07-26 01:10:19.627776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1fcb0 is same with the state(5) to be set 00:27:49.437 [2024-07-26 01:10:19.629001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.437 [2024-07-26 01:10:19.629025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.437 [2024-07-26 01:10:19.629042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:49.437 [2024-07-26 01:10:19.629066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:27:49.437 [2024-07-26 01:10:19.629370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.437 [2024-07-26 01:10:19.629399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaacc40 with addr=10.0.0.2, port=4420 00:27:49.437 [2024-07-26 01:10:19.629415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacc40 is same with the state(5) to be set 00:27:49.437 [2024-07-26 01:10:19.629549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.437 [2024-07-26 01:10:19.629574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaac3b0 with addr=10.0.0.2, port=4420 00:27:49.437 [2024-07-26 01:10:19.629590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaac3b0 is same with the state(5) to be set 00:27:49.437 [2024-07-26 01:10:19.630157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaacc40 (9): Bad file descriptor 00:27:49.437 [2024-07-26 01:10:19.630185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaac3b0 (9): Bad file descriptor 00:27:49.437 [2024-07-26 01:10:19.630270] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:49.437 [2024-07-26 01:10:19.630291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:49.437 [2024-07-26 01:10:19.630307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:27:49.437 [2024-07-26 01:10:19.630324] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:49.437 [2024-07-26 01:10:19.630338] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:49.437 [2024-07-26 01:10:19.630351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:49.437 [2024-07-26 01:10:19.630411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:49.437 [2024-07-26 01:10:19.630433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:49.437 [2024-07-26 01:10:19.630450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.437 [2024-07-26 01:10:19.630463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.437 [2024-07-26 01:10:19.630590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.437 [2024-07-26 01:10:19.630617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc3e910 with addr=10.0.0.2, port=4420 00:27:49.437 [2024-07-26 01:10:19.630632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3e910 is same with the state(5) to be set 00:27:49.437 [2024-07-26 01:10:19.630735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.437 [2024-07-26 01:10:19.630760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa98a0 with addr=10.0.0.2, port=4420 00:27:49.437 [2024-07-26 01:10:19.630776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa98a0 is same with the state(5) to be set 00:27:49.437 [2024-07-26 01:10:19.630811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xc3e910 (9): Bad file descriptor 00:27:49.437 [2024-07-26 01:10:19.630833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa98a0 (9): Bad file descriptor 00:27:49.437 [2024-07-26 01:10:19.630864] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:49.437 [2024-07-26 01:10:19.630880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:49.437 [2024-07-26 01:10:19.630894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:49.437 [2024-07-26 01:10:19.630910] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:49.437 [2024-07-26 01:10:19.630924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:49.437 [2024-07-26 01:10:19.630937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:49.437 [2024-07-26 01:10:19.630974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.437 [2024-07-26 01:10:19.630996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.437 [2024-07-26 01:10:19.631077] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:27:49.437 [2024-07-26 01:10:19.631110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:27:49.437 [2024-07-26 01:10:19.631127] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:49.437 [2024-07-26 01:10:19.631271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.437 [2024-07-26 01:10:19.631298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaad280 with addr=10.0.0.2, port=4420
00:27:49.437 [2024-07-26 01:10:19.631313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad280 is same with the state(5) to be set
00:27:49.437 [2024-07-26 01:10:19.631423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.437 [2024-07-26 01:10:19.631448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x597610 with addr=10.0.0.2, port=4420
00:27:49.437 [2024-07-26 01:10:19.631464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x597610 is same with the state(5) to be set
00:27:49.437 [2024-07-26 01:10:19.631580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.437 [2024-07-26 01:10:19.631605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x668ee0 with addr=10.0.0.2, port=4420
00:27:49.437 [2024-07-26 01:10:19.631620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x668ee0 is same with the state(5) to be set
00:27:49.437 [2024-07-26 01:10:19.631655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaad280 (9): Bad file descriptor
00:27:49.437 [2024-07-26 01:10:19.631677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x597610 (9): Bad file descriptor
00:27:49.437 [2024-07-26 01:10:19.631695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x668ee0 (9): Bad file descriptor
00:27:49.437 [2024-07-26 01:10:19.631724] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:27:49.437 [2024-07-26 01:10:19.631740] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:27:49.437 [2024-07-26 01:10:19.631754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:27:49.437 [2024-07-26 01:10:19.631771] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:27:49.437 [2024-07-26 01:10:19.631785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:27:49.437 [2024-07-26 01:10:19.631798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:27:49.437 [2024-07-26 01:10:19.631814] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:49.437 [2024-07-26 01:10:19.631827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:49.437 [2024-07-26 01:10:19.631840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:49.437 [2024-07-26 01:10:19.631886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:49.437 [2024-07-26 01:10:19.631906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:49.437 [2024-07-26 01:10:19.631918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:49.437 [2024-07-26 01:10:19.631953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:27:49.437 [2024-07-26 01:10:19.631973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:27:49.437 [2024-07-26 01:10:19.632108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.437 [2024-07-26 01:10:19.632143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x66f490 with addr=10.0.0.2, port=4420
00:27:49.437 [2024-07-26 01:10:19.632159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66f490 is same with the state(5) to be set
00:27:49.437 [2024-07-26 01:10:19.632282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.437 [2024-07-26 01:10:19.632307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa1940 with addr=10.0.0.2, port=4420
00:27:49.438 [2024-07-26 01:10:19.632322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa1940 is same with the state(5) to be set
00:27:49.438 [2024-07-26 01:10:19.632358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x66f490 (9): Bad file descriptor
00:27:49.438 [2024-07-26 01:10:19.632380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa1940 (9): Bad file descriptor
00:27:49.438 [2024-07-26 01:10:19.632428] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:27:49.438 [2024-07-26 01:10:19.632447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:27:49.438 [2024-07-26 01:10:19.632463]
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:49.438 [2024-07-26 01:10:19.632480] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:49.438 [2024-07-26 01:10:19.632493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:49.438 [2024-07-26 01:10:19.632506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:49.438 [2024-07-26 01:10:19.632571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.438 [2024-07-26 01:10:19.632593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.438 [2024-07-26 01:10:19.632619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.438 [2024-07-26 01:10:19.632635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.438 [2024-07-26 01:10:19.632653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.438 [2024-07-26 01:10:19.632666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.438 [2024-07-26 01:10:19.632682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.438 [2024-07-26 01:10:19.632696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.438 [2024-07-26 01:10:19.632712] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.438 [2024-07-26 01:10:19.632726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.438 [2024-07-26 01:10:19.632743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.438 [2024-07-26 01:10:19.632756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.438 [2024-07-26 01:10:19.632773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.438 [2024-07-26 01:10:19.632787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.438 [2024-07-26 01:10:19.632807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.438 [2024-07-26 01:10:19.632822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.438 [2024-07-26 01:10:19.632838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.438 [2024-07-26 01:10:19.632851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.438 [2024-07-26 01:10:19.632867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.438 [2024-07-26 01:10:19.632881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.438 [2024-07-26 01:10:19.632896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.438 [2024-07-26 01:10:19.632910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.438 [2024-07-26 01:10:19.632926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.438 [2024-07-26 01:10:19.632940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.438 [2024-07-26 01:10:19.632955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.438 [2024-07-26 01:10:19.632969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.438 [2024-07-26 01:10:19.632985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.438 [2024-07-26 01:10:19.632998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.438 [2024-07-26 01:10:19.633014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.438 [2024-07-26 01:10:19.633028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.438 [2024-07-26 01:10:19.633044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:49.438 [2024-07-26 01:10:19.633057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.438 [2024-07-26 01:10:19.633083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.438 [2024-07-26 01:10:19.633098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.438 [2024-07-26 01:10:19.633114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.438 [2024-07-26 01:10:19.633128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.438 [2024-07-26 01:10:19.633144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.438 [2024-07-26 01:10:19.633157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.438 [2024-07-26 01:10:19.633173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.438 [2024-07-26 01:10:19.633190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.438 [2024-07-26 01:10:19.633207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.438 [2024-07-26 01:10:19.633221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.438 [2024-07-26 01:10:19.633237] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.438 [2024-07-26 01:10:19.633250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.438 [2024-07-26 01:10:19.633266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.438 [2024-07-26 01:10:19.633280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.438 [2024-07-26 01:10:19.633296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.438 [2024-07-26 01:10:19.633309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.438 [2024-07-26 01:10:19.633325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.438 [2024-07-26 01:10:19.633338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.438 [2024-07-26 01:10:19.633354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.438 [2024-07-26 01:10:19.633368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.438 [2024-07-26 01:10:19.633383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.438 [2024-07-26 01:10:19.633396] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.438 [2024-07-26 01:10:19.633412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.438 [2024-07-26 01:10:19.633426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.438 [2024-07-26 01:10:19.633442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.438 [2024-07-26 01:10:19.633455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.438 [2024-07-26 01:10:19.633470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.438 [2024-07-26 01:10:19.633484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.633500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.633514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.633530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.633543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.633562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.633577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.633593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.633607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.633622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.633636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.633652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.633665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.633681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.633694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.633710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.633724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 
01:10:19.633740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.633753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.633769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.633782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.633798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.633811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.633827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.633841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.633856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.633870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.633886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.633900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.633915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.633932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.633948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.633962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.633979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.633992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.634008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.634021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.634037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.634050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.634071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 
nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.634086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.634102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.634116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.634131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.634145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.634161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.634174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.634190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.634204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.634220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.634234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:49.439 [2024-07-26 01:10:19.634249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.634263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.634278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.634292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.634312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.634326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.634342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.634356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.634371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.634385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.634400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.634414] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.634431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.634444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.634460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.634473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.634489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.439 [2024-07-26 01:10:19.634502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.439 [2024-07-26 01:10:19.634516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02d10 is same with the state(5) to be set 00:27:49.439 [2024-07-26 01:10:19.636153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.439 [2024-07-26 01:10:19.636177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.439 task offset: 25984 on job bdev=Nvme1n1 fails
00:27:49.439
00:27:49.439 Latency(us)
00:27:49.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:49.439 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:49.439 Job: Nvme1n1 ended in about 0.88 seconds with error
00:27:49.439 Verification LBA range: start 0x0 length 0x400
00:27:49.439 Nvme1n1 : 0.88 218.87 13.68 72.96 0.00 216746.10 4878.79 253211.69
00:27:49.439 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:49.439 Job: Nvme2n1 ended in about 0.88 seconds with error
00:27:49.440 Verification LBA range: start 0x0 length 0x400
00:27:49.440 Nvme2n1 : 0.88 215.32 13.46 6.80 0.00 278271.65 20388.98 253211.69
00:27:49.440 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:49.440 Job: Nvme3n1 ended in about 0.90 seconds with error
00:27:49.440 Verification LBA range: start 0x0 length 0x400
00:27:49.440 Nvme3n1 : 0.90 217.16 13.57 70.91 0.00 210611.79 18641.35 253211.69
00:27:49.440 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:49.440 Job: Nvme4n1 ended in about 0.89 seconds with error
00:27:49.440 Verification LBA range: start 0x0 length 0x400
00:27:49.440 Nvme4n1 : 0.89 219.94 13.75 71.82 0.00 203313.49 7184.69 251658.24
00:27:49.440 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:49.440 Job: Nvme5n1 ended in about 0.91 seconds with error
00:27:49.440 Verification LBA range: start 0x0 length 0x400
00:27:49.440 Nvme5n1 : 0.91 140.21 8.76 70.10 0.00 276550.29 40001.23 254765.13
00:27:49.440 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:49.440 Job: Nvme6n1 ended in about 0.92 seconds with error
00:27:49.440 Verification LBA range: start 0x0 length 0x400
00:27:49.440 Nvme6n1 : 0.92 139.72 8.73 69.86 0.00 271499.06 20777.34 251658.24
00:27:49.440 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:49.440 Verification LBA range: start 0x0 length 0x400
00:27:49.440 Nvme7n1 : 0.89 216.63 13.54 0.00 0.00 255447.80 16990.81 253211.69
00:27:49.440 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:49.440 Job: Nvme8n1 ended in about 0.90 seconds with error
00:27:49.440 Verification LBA range: start 0x0 length 0x400
00:27:49.440 Nvme8n1 : 0.90 147.15 9.20 71.34 0.00 247964.84 18738.44 251658.24
00:27:49.440 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:49.440 Job: Nvme9n1 ended in about 0.92 seconds with error
00:27:49.440 Verification LBA range: start 0x0 length 0x400
00:27:49.440 Nvme9n1 : 0.92 138.70 8.67 69.35 0.00 255944.88 20971.52 257872.02
00:27:49.440 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:49.440 Job: Nvme10n1 ended in about 0.91 seconds with error
00:27:49.440 Verification LBA range: start 0x0 length 0x400
00:27:49.440 Nvme10n1 : 0.91 141.30 8.83 70.65 0.00 244408.07 21456.97 282727.16
00:27:49.440 ===================================================================================================================
00:27:49.440 Total : 1794.98 112.19 573.79 0.00 242758.06 4878.79 282727.16
00:27:49.440 [2024-07-26 01:10:19.661583] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:27:49.440 [2024-07-26 01:10:19.661667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:27:49.440 [2024-07-26 01:10:19.662047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.440 [2024-07-26 01:10:19.662093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc412b0 with addr=10.0.0.2, port=4420
00:27:49.440 [2024-07-26 01:10:19.662114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc412b0 is same with the state(5) to be set
00:27:49.440 [2024-07-26 01:10:19.662575]
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc412b0 (9): Bad file descriptor
00:27:49.440 [2024-07-26 01:10:19.662925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:27:49.440 [2024-07-26 01:10:19.662956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:27:49.440 [2024-07-26 01:10:19.662974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:27:49.440 [2024-07-26 01:10:19.662990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:27:49.440 [2024-07-26 01:10:19.663006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:49.440 [2024-07-26 01:10:19.663073] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:27:49.440 [2024-07-26 01:10:19.663092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:27:49.440 [2024-07-26 01:10:19.663109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:27:49.440 [2024-07-26 01:10:19.663153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:27:49.440 [2024-07-26 01:10:19.663175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:27:49.440 [2024-07-26 01:10:19.663192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:27:49.440 [2024-07-26 01:10:19.663218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:27:49.440 [2024-07-26 01:10:19.663249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:49.440 [2024-07-26 01:10:19.663401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.440 [2024-07-26 01:10:19.663428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaac3b0 with addr=10.0.0.2, port=4420
00:27:49.440 [2024-07-26 01:10:19.663446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaac3b0 is same with the state(5) to be set
00:27:49.440 [2024-07-26 01:10:19.663552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.440 [2024-07-26 01:10:19.663579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaacc40 with addr=10.0.0.2, port=4420
00:27:49.440 [2024-07-26 01:10:19.663595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacc40 is same with the state(5) to be set
00:27:49.440 [2024-07-26 01:10:19.663696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.440 [2024-07-26 01:10:19.663722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa98a0 with addr=10.0.0.2, port=4420
00:27:49.440 [2024-07-26 01:10:19.663738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa98a0 is same with the state(5) to be set
00:27:49.440 [2024-07-26 01:10:19.663846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.440 [2024-07-26 01:10:19.663872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc3e910 with addr=10.0.0.2, port=4420
00:27:49.440 [2024-07-26 01:10:19.663888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3e910 is same with the state(5) to be set
00:27:49.440 [2024-07-26 01:10:19.663987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.440 [2024-07-26 01:10:19.664013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x668ee0 with addr=10.0.0.2, port=4420
00:27:49.440 [2024-07-26 01:10:19.664029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x668ee0 is same with the state(5) to be set
00:27:49.440 [2024-07-26 01:10:19.664197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.440 [2024-07-26 01:10:19.664225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x597610 with addr=10.0.0.2, port=4420
00:27:49.440 [2024-07-26 01:10:19.664241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x597610 is same with the state(5) to be set
00:27:49.440 [2024-07-26 01:10:19.664338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.440 [2024-07-26 01:10:19.664365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaad280 with addr=10.0.0.2, port=4420
00:27:49.440 [2024-07-26 01:10:19.664380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad280 is same with the state(5) to be set
00:27:49.440 [2024-07-26 01:10:19.664475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.440 [2024-07-26 01:10:19.664501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa1940 with addr=10.0.0.2, port=4420
00:27:49.440 [2024-07-26 01:10:19.664517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa1940 is same with the state(5) to be set
00:27:49.440 [2024-07-26 01:10:19.664618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:49.440 [2024-07-26 01:10:19.664644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x66f490 with addr=10.0.0.2, port=4420
00:27:49.440 [2024-07-26 01:10:19.664660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66f490 is same with the state(5) to be set
00:27:49.440 [2024-07-26 01:10:19.664678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaac3b0 (9): Bad file descriptor
00:27:49.440 [2024-07-26 01:10:19.664703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaacc40 (9): Bad file descriptor
00:27:49.440 [2024-07-26 01:10:19.664722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa98a0 (9): Bad file descriptor
00:27:49.440 [2024-07-26 01:10:19.664739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc3e910 (9): Bad file descriptor
00:27:49.440 [2024-07-26 01:10:19.664756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x668ee0 (9): Bad file descriptor
00:27:49.440 [2024-07-26 01:10:19.664797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x597610 (9): Bad file descriptor
00:27:49.440 [2024-07-26 01:10:19.664820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaad280 (9): Bad file descriptor
00:27:49.440 [2024-07-26 01:10:19.664838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa1940 (9): Bad file descriptor
00:27:49.440 [2024-07-26 01:10:19.664855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x66f490 (9): Bad file descriptor
00:27:49.440 [2024-07-26 01:10:19.664871] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:27:49.440 [2024-07-26 01:10:19.664884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:27:49.440 [2024-07-26 01:10:19.664897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:27:49.440 [2024-07-26 01:10:19.664914] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:49.440 [2024-07-26 01:10:19.664927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:49.440 [2024-07-26 01:10:19.664940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:49.440 [2024-07-26 01:10:19.664955] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:49.440 [2024-07-26 01:10:19.664968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:49.440 [2024-07-26 01:10:19.664982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:49.440 [2024-07-26 01:10:19.664996] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:49.440 [2024-07-26 01:10:19.665010] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:49.440 [2024-07-26 01:10:19.665022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:49.441 [2024-07-26 01:10:19.665037] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:49.441 [2024-07-26 01:10:19.665050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:49.441 [2024-07-26 01:10:19.665072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:49.441 [2024-07-26 01:10:19.665111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:49.441 [2024-07-26 01:10:19.665129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.441 [2024-07-26 01:10:19.665141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.441 [2024-07-26 01:10:19.665153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.441 [2024-07-26 01:10:19.665165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.441 [2024-07-26 01:10:19.665177] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:49.441 [2024-07-26 01:10:19.665189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:49.441 [2024-07-26 01:10:19.665207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:49.441 [2024-07-26 01:10:19.665224] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:49.441 [2024-07-26 01:10:19.665238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:49.441 [2024-07-26 01:10:19.665253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:49.441 [2024-07-26 01:10:19.665268] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:49.441 [2024-07-26 01:10:19.665281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:49.441 [2024-07-26 01:10:19.665294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:27:49.441 [2024-07-26 01:10:19.665309] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:49.441 [2024-07-26 01:10:19.665321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:49.441 [2024-07-26 01:10:19.665334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:49.441 [2024-07-26 01:10:19.665371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.441 [2024-07-26 01:10:19.665388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.441 [2024-07-26 01:10:19.665401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:49.441 [2024-07-26 01:10:19.665413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:50.007 01:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:27:50.007 01:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:27:50.944 01:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1912453 00:27:50.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1912453) - No such process 00:27:50.944 01:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:27:50.944 01:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:27:50.944 01:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:50.944 01:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:50.944 01:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:50.944 01:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:50.944 01:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:50.944 01:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:27:50.944 01:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:50.944 01:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:27:50.944 01:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:50.944 01:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:50.944 rmmod nvme_tcp 00:27:50.944 rmmod nvme_fabrics 00:27:50.944 rmmod nvme_keyring 00:27:50.944 01:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:50.944 01:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:27:50.944 01:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:27:50.944 01:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:50.944 01:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:50.944 01:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:50.944 01:10:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:50.944 01:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:50.944 01:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:50.944 01:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.944 01:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:50.944 01:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.877 01:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:52.877 00:27:52.877 real 0m7.501s 00:27:52.877 user 0m18.525s 00:27:52.877 sys 0m1.463s 00:27:52.877 01:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:52.877 01:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:52.877 ************************************ 00:27:52.877 END TEST nvmf_shutdown_tc3 00:27:52.877 ************************************ 00:27:52.877 01:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:27:52.877 00:27:52.877 real 0m27.224s 00:27:52.877 user 1m15.591s 00:27:52.877 sys 0m6.521s 00:27:52.877 01:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:52.877 01:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:52.877 ************************************ 00:27:52.877 END TEST nvmf_shutdown 00:27:52.877 
************************************ 00:27:52.877 01:10:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:27:52.877 00:27:52.877 real 16m47.924s 00:27:52.877 user 47m18.880s 00:27:52.877 sys 3m52.243s 00:27:52.877 01:10:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:52.877 01:10:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:52.877 ************************************ 00:27:52.877 END TEST nvmf_target_extra 00:27:52.877 ************************************ 00:27:53.142 01:10:23 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:53.142 01:10:23 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:53.142 01:10:23 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:53.142 01:10:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:53.142 ************************************ 00:27:53.142 START TEST nvmf_host 00:27:53.142 ************************************ 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:53.142 * Looking for test storage... 
00:27:53.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.142 01:10:23 
nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.142 ************************************ 00:27:53.142 START TEST nvmf_multicontroller 00:27:53.142 ************************************ 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:53.142 * Looking for test storage... 00:27:53.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:53.142 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- paths/export.sh@5 -- # export PATH 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:53.143 01:10:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:27:53.143 01:10:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.045 01:10:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:55.045 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:27:55.045 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:55.045 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:55.045 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:55.045 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:55.045 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:55.045 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:27:55.045 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:55.045 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:27:55.045 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:27:55.045 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:55.046 01:10:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:55.046 Found 0000:0a:00.0 (0x8086 - 0x159b) 
00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:55.046 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
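The discovery loop traced above (nvmf/common.sh@382-401) finds the kernel net device bound to each NIC by expanding `/sys/bus/pci/devices/$pci/net/*` and stripping the path prefix. A minimal sketch of that lookup — `find_pci_net_devs` and the parameterized sysfs root are assumptions added here so the logic can be exercised against a mock tree instead of real hardware:

```shell
#!/usr/bin/env bash
# Sketch of the pci -> net-device lookup the test harness performs.
# SYSFS_ROOT is a parameter (an assumption of this sketch); on a live
# system it would simply be /sys.
find_pci_net_devs() {
    local sysfs_root=$1 pci=$2
    # Glob the net/ subdirectory of the PCI device (cf. nvmf/common.sh@383)
    local pci_net_devs=("$sysfs_root/bus/pci/devices/$pci/net/"*)
    # Keep only the interface names, dropping the leading path (cf. sh@399)
    pci_net_devs=("${pci_net_devs[@]##*/}")
    printf '%s\n' "${pci_net_devs[@]}"
}
```

On the machine in this log, calling it for `0000:0a:00.0` would yield `cvl_0_0`, matching the "Found net devices under 0000:0a:00.0: cvl_0_0" line that follows.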
00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:55.046 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:55.046 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- 
# net_devs+=("${pci_net_devs[@]}") 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:55.046 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 
00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:55.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:55.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:27:55.305 00:27:55.305 --- 10.0.0.2 ping statistics --- 00:27:55.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:55.305 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:55.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
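The setup traced at nvmf/common.sh@244-268 moves one port (`cvl_0_0`) into a fresh namespace as the target side, keeps the other (`cvl_0_1`) in the root namespace as the initiator, opens TCP port 4420, and ping-checks both directions. A dry-run sketch of that sequence — the `run` recorder wrapper is an assumption of this sketch so the commands can be inspected without root; removing it would execute them for real:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the TCP test-network setup seen above.
# run() only records each command into CMDS (an assumption of this
# sketch); no root privileges are needed to build the list.
CMDS=()
run() { CMDS+=("$*"); }

NS=cvl_0_0_ns_spdk TGT=cvl_0_0 INI=cvl_0_1
run ip -4 addr flush "$TGT"
run ip -4 addr flush "$INI"
run ip netns add "$NS"
run ip link set "$TGT" netns "$NS"              # target port moves into the namespace
run ip addr add 10.0.0.1/24 dev "$INI"          # initiator address, root namespace
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
run ip link set "$INI" up
run ip netns exec "$NS" ip link set "$TGT" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                          # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator
printf '%s\n' "${CMDS[@]}"
```

The namespace is why the target is later launched as `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...`: both ends share one physical NIC pair but see isolated network stacks.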
00:27:55.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:27:55.305 00:27:55.305 --- 10.0.0.1 ping statistics --- 00:27:55.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:55.305 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1914997 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1914997 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1914997 ']' 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:55.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:55.305 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.305 [2024-07-26 01:10:25.678163] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:27:55.305 [2024-07-26 01:10:25.678247] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:55.305 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.563 [2024-07-26 01:10:25.743496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:55.563 [2024-07-26 01:10:25.832751] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:55.563 [2024-07-26 01:10:25.832813] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:55.563 [2024-07-26 01:10:25.832840] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:55.563 [2024-07-26 01:10:25.832851] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:55.563 [2024-07-26 01:10:25.832861] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:55.563 [2024-07-26 01:10:25.832950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:55.563 [2024-07-26 01:10:25.833084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:55.563 [2024-07-26 01:10:25.833088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.563 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:55.563 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:27:55.563 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:55.564 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:55.564 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.564 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:55.564 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:55.564 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.564 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.564 [2024-07-26 01:10:25.969346] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:55.564 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.564 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:55.564 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.564 01:10:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.822 Malloc0 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.822 [2024-07-26 
01:10:26.028081] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.822 [2024-07-26 01:10:26.035943] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.822 Malloc1 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1915136 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1915136 /var/tmp/bdevperf.sock 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1915136 ']' 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:55.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:55.822 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:56.081 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:56.081 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:27:56.081 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:56.081 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.081 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:56.081 NVMe0n1 00:27:56.081 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.081 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:56.081 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:56.081 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.081 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:56.081 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.081 1 00:27:56.081 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:56.081 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:56.081 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:56.081 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:56.081 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:56.081 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:56.081 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:56.081 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 
00:27:56.081 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.081 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:56.081 request: 00:27:56.081 { 00:27:56.081 "name": "NVMe0", 00:27:56.081 "trtype": "tcp", 00:27:56.081 "traddr": "10.0.0.2", 00:27:56.081 "adrfam": "ipv4", 00:27:56.081 "trsvcid": "4420", 00:27:56.081 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:56.081 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:56.081 "hostaddr": "10.0.0.2", 00:27:56.081 "hostsvcid": "60000", 00:27:56.081 "prchk_reftag": false, 00:27:56.081 "prchk_guard": false, 00:27:56.081 "hdgst": false, 00:27:56.081 "ddgst": false, 00:27:56.081 "method": "bdev_nvme_attach_controller", 00:27:56.081 "req_id": 1 00:27:56.081 } 00:27:56.081 Got JSON-RPC error response 00:27:56.081 response: 00:27:56.081 { 00:27:56.081 "code": -114, 00:27:56.081 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:56.081 } 00:27:56.081 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:56.081 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:56.081 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:56.081 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:56.081 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:56.081 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:56.081 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:56.082 01:10:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:56.082 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:56.082 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:56.082 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:56.082 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:56.082 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:56.082 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.082 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:56.339 request: 00:27:56.339 { 00:27:56.339 "name": "NVMe0", 00:27:56.339 "trtype": "tcp", 00:27:56.339 "traddr": "10.0.0.2", 00:27:56.339 "adrfam": "ipv4", 00:27:56.339 "trsvcid": "4420", 00:27:56.339 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:56.339 "hostaddr": "10.0.0.2", 00:27:56.339 "hostsvcid": "60000", 00:27:56.339 "prchk_reftag": false, 00:27:56.339 "prchk_guard": false, 00:27:56.339 "hdgst": false, 00:27:56.339 "ddgst": false, 00:27:56.339 "method": "bdev_nvme_attach_controller", 00:27:56.339 "req_id": 1 00:27:56.339 } 00:27:56.339 Got JSON-RPC error response 00:27:56.339 response: 00:27:56.339 { 00:27:56.339 "code": -114, 00:27:56.339 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:56.339 } 00:27:56.339 
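Each negative case above expects JSON-RPC error code -114 ("A controller named NVMe0 already exists ...") from a duplicate `bdev_nvme_attach_controller`, and the harness's `NOT` wrapper turns that expected failure into a pass. A small sketch of recognizing that response — the helper name and inlined sample body are assumptions of this sketch, with the field layout taken from the responses in the log:

```shell
#!/usr/bin/env bash
# Sketch: detect the -114 "controller already exists" JSON-RPC error that
# the duplicate attach attempts above are expected to produce.
is_dup_controller_err() {
    # Succeeds (exit 0) when the response body carries error code -114.
    grep -q '"code": *-114' <<<"$1"
}

resp='{"code": -114, "message": "A controller named NVMe0 already exists with the specified network path"}'
if is_dup_controller_err "$resp"; then
    echo "duplicate-attach rejected as expected"
fi
```

This mirrors the test's logic at autotest_common.sh@653-677: the RPC call must fail, and a zero exit status there would itself be the test failure.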
01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:56.339 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:56.339 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:56.340 01:10:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:56.340 request: 00:27:56.340 { 00:27:56.340 "name": "NVMe0", 00:27:56.340 "trtype": "tcp", 00:27:56.340 "traddr": "10.0.0.2", 00:27:56.340 "adrfam": "ipv4", 00:27:56.340 "trsvcid": "4420", 00:27:56.340 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:56.340 "hostaddr": "10.0.0.2", 00:27:56.340 "hostsvcid": "60000", 00:27:56.340 "prchk_reftag": false, 00:27:56.340 "prchk_guard": false, 00:27:56.340 "hdgst": false, 00:27:56.340 "ddgst": false, 00:27:56.340 "multipath": "disable", 00:27:56.340 "method": "bdev_nvme_attach_controller", 00:27:56.340 "req_id": 1 00:27:56.340 } 00:27:56.340 Got JSON-RPC error response 00:27:56.340 response: 00:27:56.340 { 00:27:56.340 "code": -114, 00:27:56.340 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:27:56.340 } 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:56.340 request: 00:27:56.340 { 00:27:56.340 "name": "NVMe0", 00:27:56.340 "trtype": "tcp", 00:27:56.340 "traddr": "10.0.0.2", 00:27:56.340 "adrfam": "ipv4", 00:27:56.340 "trsvcid": "4420", 00:27:56.340 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:56.340 "hostaddr": "10.0.0.2", 00:27:56.340 "hostsvcid": "60000", 00:27:56.340 "prchk_reftag": false, 00:27:56.340 "prchk_guard": false, 00:27:56.340 "hdgst": false, 00:27:56.340 "ddgst": false, 00:27:56.340 "multipath": "failover", 00:27:56.340 "method": "bdev_nvme_attach_controller", 00:27:56.340 "req_id": 1 00:27:56.340 } 00:27:56.340 Got JSON-RPC error response 00:27:56.340 response: 00:27:56.340 { 00:27:56.340 "code": -114, 00:27:56.340 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:56.340 
} 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:56.340 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:56.340 01:10:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.340 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:56.598 00:27:56.598 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.598 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:56.598 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:56.598 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.598 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:56.598 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.598 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:56.598 01:10:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:57.972 0 00:27:57.972 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1915136 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' 
-z 1915136 ']' 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1915136 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1915136 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1915136' 00:27:57.973 killing process with pid 1915136 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1915136 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1915136 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:27:57.973 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:57.973 [2024-07-26 01:10:26.141515] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:27:57.973 [2024-07-26 01:10:26.141611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1915136 ] 00:27:57.973 EAL: No free 2048 kB hugepages reported on node 1 00:27:57.973 [2024-07-26 01:10:26.200850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.973 [2024-07-26 01:10:26.287417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.973 [2024-07-26 01:10:26.889131] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name c0b67475-9769-47a5-ac8e-573b62777832 already exists 00:27:57.973 [2024-07-26 01:10:26.889174] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:c0b67475-9769-47a5-ac8e-573b62777832 alias for bdev NVMe1n1 00:27:57.973 [2024-07-26 01:10:26.889189] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:57.973 Running I/O for 1 seconds... 
00:27:57.973 00:27:57.973 Latency(us) 00:27:57.973 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:57.973 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:57.973 NVMe0n1 : 1.01 17064.80 66.66 0.00 0.00 7468.14 5631.24 12913.02 00:27:57.973 =================================================================================================================== 00:27:57.973 Total : 17064.80 66.66 0.00 0.00 7468.14 5631.24 12913.02 00:27:57.973 Received shutdown signal, test time was about 1.000000 seconds 00:27:57.973 00:27:57.973 Latency(us) 00:27:57.973 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:57.973 =================================================================================================================== 00:27:57.973 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:57.973 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:57.973 
rmmod nvme_tcp 00:27:57.973 rmmod nvme_fabrics 00:27:57.973 rmmod nvme_keyring 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1914997 ']' 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1914997 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1914997 ']' 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1914997 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1914997 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1914997' 00:27:57.973 killing process with pid 1914997 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1914997 00:27:57.973 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1914997 00:27:58.540 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:58.540 01:10:28 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:58.540 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:58.540 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:58.540 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:58.540 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.540 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:58.540 01:10:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.446 01:10:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:00.446 00:28:00.446 real 0m7.303s 00:28:00.446 user 0m11.388s 00:28:00.446 sys 0m2.282s 00:28:00.446 01:10:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:00.446 01:10:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:00.446 ************************************ 00:28:00.446 END TEST nvmf_multicontroller 00:28:00.446 ************************************ 00:28:00.446 01:10:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:00.446 01:10:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:00.446 01:10:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:00.446 01:10:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.446 ************************************ 00:28:00.446 START TEST nvmf_aer 00:28:00.446 ************************************ 00:28:00.446 01:10:30 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:00.446 * Looking for test storage... 00:28:00.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:28:00.447 01:10:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga 
x722 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:02.348 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:02.348 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:02.348 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:02.348 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:02.348 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:02.606 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:02.606 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:02.606 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:02.606 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:02.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:02.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:28:02.606 00:28:02.606 --- 10.0.0.2 ping statistics --- 00:28:02.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:02.606 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:28:02.606 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:02.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:02.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:28:02.606 00:28:02.606 --- 10.0.0.1 ping statistics --- 00:28:02.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:02.606 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:28:02.606 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:02.606 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:28:02.606 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:02.606 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:02.606 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:02.606 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:02.606 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:02.606 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:02.606 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:02.606 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:02.606 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:02.606 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:02.606 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:02.606 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1917268 00:28:02.606 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1917268 00:28:02.606 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:02.607 01:10:32 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 1917268 ']' 00:28:02.607 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:02.607 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:02.607 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:02.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:02.607 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:02.607 01:10:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:02.607 [2024-07-26 01:10:32.905685] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:28:02.607 [2024-07-26 01:10:32.905783] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:02.607 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.607 [2024-07-26 01:10:32.973455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:02.865 [2024-07-26 01:10:33.059716] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:02.865 [2024-07-26 01:10:33.059768] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:02.865 [2024-07-26 01:10:33.059791] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:02.865 [2024-07-26 01:10:33.059802] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:02.865 [2024-07-26 01:10:33.059812] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:02.865 [2024-07-26 01:10:33.059965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.865 [2024-07-26 01:10:33.060087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:02.865 [2024-07-26 01:10:33.060114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:02.865 [2024-07-26 01:10:33.060117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:02.865 [2024-07-26 01:10:33.214629] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.865 01:10:33 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:02.865 Malloc0 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:02.865 [2024-07-26 01:10:33.267984] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:02.865 [ 
00:28:02.865 { 00:28:02.865 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:02.865 "subtype": "Discovery", 00:28:02.865 "listen_addresses": [], 00:28:02.865 "allow_any_host": true, 00:28:02.865 "hosts": [] 00:28:02.865 }, 00:28:02.865 { 00:28:02.865 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:02.865 "subtype": "NVMe", 00:28:02.865 "listen_addresses": [ 00:28:02.865 { 00:28:02.865 "trtype": "TCP", 00:28:02.865 "adrfam": "IPv4", 00:28:02.865 "traddr": "10.0.0.2", 00:28:02.865 "trsvcid": "4420" 00:28:02.865 } 00:28:02.865 ], 00:28:02.865 "allow_any_host": true, 00:28:02.865 "hosts": [], 00:28:02.865 "serial_number": "SPDK00000000000001", 00:28:02.865 "model_number": "SPDK bdev Controller", 00:28:02.865 "max_namespaces": 2, 00:28:02.865 "min_cntlid": 1, 00:28:02.865 "max_cntlid": 65519, 00:28:02.865 "namespaces": [ 00:28:02.865 { 00:28:02.865 "nsid": 1, 00:28:02.865 "bdev_name": "Malloc0", 00:28:02.865 "name": "Malloc0", 00:28:02.865 "nguid": "784ACA6748504553A2F3C062654D855C", 00:28:02.865 "uuid": "784aca67-4850-4553-a2f3-c062654d855c" 00:28:02.865 } 00:28:02.865 ] 00:28:02.865 } 00:28:02.865 ] 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1917376 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:28:02.865 01:10:33 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:28:02.865 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:03.123 EAL: No free 2048 kB hugepages reported on node 1 00:28:03.123 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:03.123 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:28:03.123 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:28:03.123 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:03.123 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:03.123 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:03.123 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:28:03.123 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:03.123 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.123 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:03.123 Malloc1 00:28:03.123 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.123 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:03.123 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.123 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:03.380 [ 00:28:03.380 { 00:28:03.380 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:03.380 "subtype": "Discovery", 00:28:03.380 "listen_addresses": [], 00:28:03.380 "allow_any_host": true, 00:28:03.380 "hosts": [] 00:28:03.380 }, 00:28:03.380 { 00:28:03.380 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:03.380 "subtype": "NVMe", 00:28:03.380 "listen_addresses": [ 00:28:03.380 { 00:28:03.380 "trtype": "TCP", 00:28:03.380 "adrfam": "IPv4", 00:28:03.380 "traddr": "10.0.0.2", 00:28:03.380 "trsvcid": "4420" 00:28:03.380 } 00:28:03.380 ], 00:28:03.380 "allow_any_host": true, 00:28:03.380 "hosts": [], 00:28:03.380 "serial_number": "SPDK00000000000001", 00:28:03.380 "model_number": 
"SPDK bdev Controller", 00:28:03.380 "max_namespaces": 2, 00:28:03.380 "min_cntlid": 1, 00:28:03.380 "max_cntlid": 65519, 00:28:03.380 "namespaces": [ 00:28:03.380 { 00:28:03.380 "nsid": 1, 00:28:03.380 "bdev_name": "Malloc0", 00:28:03.380 "name": "Malloc0", 00:28:03.380 "nguid": "784ACA6748504553A2F3C062654D855C", 00:28:03.380 "uuid": "784aca67-4850-4553-a2f3-c062654d855c" 00:28:03.380 }, 00:28:03.380 { 00:28:03.380 "nsid": 2, 00:28:03.380 "bdev_name": "Malloc1", 00:28:03.380 "name": "Malloc1", 00:28:03.380 "nguid": "F59580AF76724EDA91D72059046AD00C", 00:28:03.380 "uuid": "f59580af-7672-4eda-91d7-2059046ad00c" 00:28:03.380 } 00:28:03.380 ] 00:28:03.380 } 00:28:03.380 ] 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1917376 00:28:03.380 Asynchronous Event Request test 00:28:03.380 Attaching to 10.0.0.2 00:28:03.380 Attached to 10.0.0.2 00:28:03.380 Registering asynchronous event callbacks... 00:28:03.380 Starting namespace attribute notice tests for all controllers... 00:28:03.380 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:03.380 aer_cb - Changed Namespace 00:28:03.380 Cleaning up... 
00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:03.380 rmmod nvme_tcp 
00:28:03.380 rmmod nvme_fabrics 00:28:03.380 rmmod nvme_keyring 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1917268 ']' 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1917268 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 1917268 ']' 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 1917268 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1917268 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1917268' 00:28:03.380 killing process with pid 1917268 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 1917268 00:28:03.380 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 1917268 00:28:03.638 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:03.638 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:03.638 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:03.638 01:10:33 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:03.638 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:03.638 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.638 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:03.638 01:10:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:06.168 00:28:06.168 real 0m5.234s 00:28:06.168 user 0m4.220s 00:28:06.168 sys 0m1.773s 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.168 ************************************ 00:28:06.168 END TEST nvmf_aer 00:28:06.168 ************************************ 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.168 ************************************ 00:28:06.168 START TEST nvmf_async_init 00:28:06.168 ************************************ 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:06.168 * Looking for test storage... 
00:28:06.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:06.168 01:10:36 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:06.168 01:10:36 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=c2269c0a5757408bb464ae4bd9deb406 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:28:06.168 01:10:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:07.544 
01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:07.544 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.544 01:10:37 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:07.544 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:07.544 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.803 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.803 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:07.803 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:07.803 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:07.803 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:07.803 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:07.803 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.803 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:07.803 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:07.803 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:07.803 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:07.803 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:28:07.804 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:07.804 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:07.804 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:07.804 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:07.804 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.804 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:07.804 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:07.804 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:07.804 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:07.804 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.804 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:07.804 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:07.804 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:07.804 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:07.804 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:28:07.804 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:07.804 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:07.804 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:07.804 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:28:07.804 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:07.804 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:07.804 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:07.804 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:07.804 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:07.804 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:07.804 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:07.804 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:07.804 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:07.804 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:07.804 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:07.804 01:10:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:07.804 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:07.804 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:07.804 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:07.804 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:07.804 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:07.804 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:07.804 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:07.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:07.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:28:07.804 00:28:07.804 --- 10.0.0.2 ping statistics --- 00:28:07.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.804 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:28:07.804 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:07.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:07.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:28:07.804 00:28:07.804 --- 10.0.0.1 ping statistics --- 00:28:07.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.804 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:28:07.804 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:07.804 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:28:07.804 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:07.804 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:07.804 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:07.804 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:07.804 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:07.804 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:07.804 01:10:38 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:07.804 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:07.804 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:07.804 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:07.804 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:07.804 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1919308 00:28:07.804 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:07.804 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1919308 00:28:07.804 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 1919308 ']' 00:28:07.804 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.804 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:07.804 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:07.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:07.804 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:07.804 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:07.804 [2024-07-26 01:10:38.177267] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:28:07.804 [2024-07-26 01:10:38.177341] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:07.804 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.063 [2024-07-26 01:10:38.243977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.063 [2024-07-26 01:10:38.332879] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:08.063 [2024-07-26 01:10:38.332947] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:08.063 [2024-07-26 01:10:38.332964] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:08.063 [2024-07-26 01:10:38.332977] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:08.063 [2024-07-26 01:10:38.332989] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:08.063 [2024-07-26 01:10:38.333019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.063 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:08.063 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:28:08.063 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:08.063 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:08.063 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:08.063 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:08.063 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:08.063 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.063 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:08.063 [2024-07-26 01:10:38.484629] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:08.063 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.063 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:08.063 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.063 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:08.321 null0 00:28:08.321 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.321 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:08.321 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.321 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:08.321 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.321 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:08.321 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.321 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:08.321 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.321 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g c2269c0a5757408bb464ae4bd9deb406 00:28:08.321 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.321 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:08.321 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.321 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:08.321 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.321 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:08.321 [2024-07-26 01:10:38.524914] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:08.321 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.321 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:08.321 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.321 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:08.580 nvme0n1 00:28:08.580 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.580 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:08.580 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.580 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:08.580 [ 00:28:08.580 { 00:28:08.580 "name": "nvme0n1", 00:28:08.580 "aliases": [ 00:28:08.580 "c2269c0a-5757-408b-b464-ae4bd9deb406" 00:28:08.580 ], 00:28:08.580 "product_name": "NVMe disk", 00:28:08.580 "block_size": 512, 00:28:08.580 "num_blocks": 2097152, 00:28:08.580 "uuid": "c2269c0a-5757-408b-b464-ae4bd9deb406", 00:28:08.580 "assigned_rate_limits": { 00:28:08.580 "rw_ios_per_sec": 0, 00:28:08.580 "rw_mbytes_per_sec": 0, 00:28:08.580 "r_mbytes_per_sec": 0, 00:28:08.580 "w_mbytes_per_sec": 0 00:28:08.580 }, 00:28:08.580 "claimed": false, 00:28:08.580 "zoned": false, 00:28:08.580 "supported_io_types": { 00:28:08.580 "read": true, 00:28:08.580 "write": true, 00:28:08.580 "unmap": false, 00:28:08.580 "flush": true, 00:28:08.580 "reset": true, 00:28:08.580 "nvme_admin": true, 00:28:08.580 "nvme_io": true, 00:28:08.580 "nvme_io_md": false, 00:28:08.580 "write_zeroes": true, 00:28:08.580 "zcopy": false, 00:28:08.580 "get_zone_info": false, 00:28:08.580 "zone_management": false, 00:28:08.580 "zone_append": false, 00:28:08.580 "compare": true, 00:28:08.580 "compare_and_write": true, 00:28:08.580 "abort": true, 00:28:08.580 "seek_hole": false, 00:28:08.580 "seek_data": false, 00:28:08.580 "copy": true, 00:28:08.580 "nvme_iov_md": false 
00:28:08.580 }, 00:28:08.580 "memory_domains": [ 00:28:08.580 { 00:28:08.580 "dma_device_id": "system", 00:28:08.580 "dma_device_type": 1 00:28:08.580 } 00:28:08.580 ], 00:28:08.580 "driver_specific": { 00:28:08.580 "nvme": [ 00:28:08.580 { 00:28:08.580 "trid": { 00:28:08.580 "trtype": "TCP", 00:28:08.580 "adrfam": "IPv4", 00:28:08.580 "traddr": "10.0.0.2", 00:28:08.580 "trsvcid": "4420", 00:28:08.580 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:08.580 }, 00:28:08.580 "ctrlr_data": { 00:28:08.580 "cntlid": 1, 00:28:08.580 "vendor_id": "0x8086", 00:28:08.580 "model_number": "SPDK bdev Controller", 00:28:08.580 "serial_number": "00000000000000000000", 00:28:08.580 "firmware_revision": "24.09", 00:28:08.580 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:08.580 "oacs": { 00:28:08.580 "security": 0, 00:28:08.580 "format": 0, 00:28:08.580 "firmware": 0, 00:28:08.580 "ns_manage": 0 00:28:08.580 }, 00:28:08.580 "multi_ctrlr": true, 00:28:08.580 "ana_reporting": false 00:28:08.580 }, 00:28:08.580 "vs": { 00:28:08.580 "nvme_version": "1.3" 00:28:08.580 }, 00:28:08.580 "ns_data": { 00:28:08.580 "id": 1, 00:28:08.580 "can_share": true 00:28:08.580 } 00:28:08.580 } 00:28:08.580 ], 00:28:08.580 "mp_policy": "active_passive" 00:28:08.580 } 00:28:08.580 } 00:28:08.580 ] 00:28:08.580 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.580 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:08.580 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.580 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:08.580 [2024-07-26 01:10:38.778139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:08.580 [2024-07-26 01:10:38.778231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1944d20 
(9): Bad file descriptor 00:28:08.580 [2024-07-26 01:10:38.920213] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:08.580 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.580 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:08.581 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.581 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:08.581 [ 00:28:08.581 { 00:28:08.581 "name": "nvme0n1", 00:28:08.581 "aliases": [ 00:28:08.581 "c2269c0a-5757-408b-b464-ae4bd9deb406" 00:28:08.581 ], 00:28:08.581 "product_name": "NVMe disk", 00:28:08.581 "block_size": 512, 00:28:08.581 "num_blocks": 2097152, 00:28:08.581 "uuid": "c2269c0a-5757-408b-b464-ae4bd9deb406", 00:28:08.581 "assigned_rate_limits": { 00:28:08.581 "rw_ios_per_sec": 0, 00:28:08.581 "rw_mbytes_per_sec": 0, 00:28:08.581 "r_mbytes_per_sec": 0, 00:28:08.581 "w_mbytes_per_sec": 0 00:28:08.581 }, 00:28:08.581 "claimed": false, 00:28:08.581 "zoned": false, 00:28:08.581 "supported_io_types": { 00:28:08.581 "read": true, 00:28:08.581 "write": true, 00:28:08.581 "unmap": false, 00:28:08.581 "flush": true, 00:28:08.581 "reset": true, 00:28:08.581 "nvme_admin": true, 00:28:08.581 "nvme_io": true, 00:28:08.581 "nvme_io_md": false, 00:28:08.581 "write_zeroes": true, 00:28:08.581 "zcopy": false, 00:28:08.581 "get_zone_info": false, 00:28:08.581 "zone_management": false, 00:28:08.581 "zone_append": false, 00:28:08.581 "compare": true, 00:28:08.581 "compare_and_write": true, 00:28:08.581 "abort": true, 00:28:08.581 "seek_hole": false, 00:28:08.581 "seek_data": false, 00:28:08.581 "copy": true, 00:28:08.581 "nvme_iov_md": false 00:28:08.581 }, 00:28:08.581 "memory_domains": [ 00:28:08.581 { 00:28:08.581 "dma_device_id": "system", 00:28:08.581 "dma_device_type": 1 
00:28:08.581 } 00:28:08.581 ], 00:28:08.581 "driver_specific": { 00:28:08.581 "nvme": [ 00:28:08.581 { 00:28:08.581 "trid": { 00:28:08.581 "trtype": "TCP", 00:28:08.581 "adrfam": "IPv4", 00:28:08.581 "traddr": "10.0.0.2", 00:28:08.581 "trsvcid": "4420", 00:28:08.581 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:08.581 }, 00:28:08.581 "ctrlr_data": { 00:28:08.581 "cntlid": 2, 00:28:08.581 "vendor_id": "0x8086", 00:28:08.581 "model_number": "SPDK bdev Controller", 00:28:08.581 "serial_number": "00000000000000000000", 00:28:08.581 "firmware_revision": "24.09", 00:28:08.581 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:08.581 "oacs": { 00:28:08.581 "security": 0, 00:28:08.581 "format": 0, 00:28:08.581 "firmware": 0, 00:28:08.581 "ns_manage": 0 00:28:08.581 }, 00:28:08.581 "multi_ctrlr": true, 00:28:08.581 "ana_reporting": false 00:28:08.581 }, 00:28:08.581 "vs": { 00:28:08.581 "nvme_version": "1.3" 00:28:08.581 }, 00:28:08.581 "ns_data": { 00:28:08.581 "id": 1, 00:28:08.581 "can_share": true 00:28:08.581 } 00:28:08.581 } 00:28:08.581 ], 00:28:08.581 "mp_policy": "active_passive" 00:28:08.581 } 00:28:08.581 } 00:28:08.581 ] 00:28:08.581 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.581 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.581 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.581 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:08.581 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.581 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:08.581 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.4xLehCYXa5 00:28:08.581 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n 
NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:08.581 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.4xLehCYXa5 00:28:08.581 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:08.581 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.581 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:08.581 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.581 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:08.581 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.581 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:08.581 [2024-07-26 01:10:38.970860] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:08.581 [2024-07-26 01:10:38.970991] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:08.581 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.581 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4xLehCYXa5 00:28:08.581 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.581 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:08.581 [2024-07-26 01:10:38.978874] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in 
v24.09 00:28:08.581 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.581 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4xLehCYXa5 00:28:08.581 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.581 01:10:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:08.581 [2024-07-26 01:10:38.986904] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:08.581 [2024-07-26 01:10:38.986966] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:08.840 nvme0n1 00:28:08.840 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.840 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:08.840 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.840 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:08.840 [ 00:28:08.840 { 00:28:08.840 "name": "nvme0n1", 00:28:08.840 "aliases": [ 00:28:08.840 "c2269c0a-5757-408b-b464-ae4bd9deb406" 00:28:08.840 ], 00:28:08.840 "product_name": "NVMe disk", 00:28:08.840 "block_size": 512, 00:28:08.840 "num_blocks": 2097152, 00:28:08.840 "uuid": "c2269c0a-5757-408b-b464-ae4bd9deb406", 00:28:08.840 "assigned_rate_limits": { 00:28:08.840 "rw_ios_per_sec": 0, 00:28:08.840 "rw_mbytes_per_sec": 0, 00:28:08.840 "r_mbytes_per_sec": 0, 00:28:08.840 "w_mbytes_per_sec": 0 00:28:08.840 }, 00:28:08.840 "claimed": false, 00:28:08.840 "zoned": false, 00:28:08.840 "supported_io_types": { 
00:28:08.840 "read": true, 00:28:08.840 "write": true, 00:28:08.840 "unmap": false, 00:28:08.840 "flush": true, 00:28:08.840 "reset": true, 00:28:08.840 "nvme_admin": true, 00:28:08.840 "nvme_io": true, 00:28:08.840 "nvme_io_md": false, 00:28:08.840 "write_zeroes": true, 00:28:08.840 "zcopy": false, 00:28:08.840 "get_zone_info": false, 00:28:08.840 "zone_management": false, 00:28:08.840 "zone_append": false, 00:28:08.840 "compare": true, 00:28:08.840 "compare_and_write": true, 00:28:08.840 "abort": true, 00:28:08.840 "seek_hole": false, 00:28:08.840 "seek_data": false, 00:28:08.840 "copy": true, 00:28:08.840 "nvme_iov_md": false 00:28:08.840 }, 00:28:08.840 "memory_domains": [ 00:28:08.840 { 00:28:08.840 "dma_device_id": "system", 00:28:08.840 "dma_device_type": 1 00:28:08.840 } 00:28:08.840 ], 00:28:08.840 "driver_specific": { 00:28:08.840 "nvme": [ 00:28:08.840 { 00:28:08.840 "trid": { 00:28:08.840 "trtype": "TCP", 00:28:08.840 "adrfam": "IPv4", 00:28:08.840 "traddr": "10.0.0.2", 00:28:08.840 "trsvcid": "4421", 00:28:08.840 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:08.840 }, 00:28:08.840 "ctrlr_data": { 00:28:08.840 "cntlid": 3, 00:28:08.840 "vendor_id": "0x8086", 00:28:08.840 "model_number": "SPDK bdev Controller", 00:28:08.840 "serial_number": "00000000000000000000", 00:28:08.840 "firmware_revision": "24.09", 00:28:08.840 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:08.840 "oacs": { 00:28:08.840 "security": 0, 00:28:08.840 "format": 0, 00:28:08.840 "firmware": 0, 00:28:08.840 "ns_manage": 0 00:28:08.840 }, 00:28:08.840 "multi_ctrlr": true, 00:28:08.840 "ana_reporting": false 00:28:08.840 }, 00:28:08.840 "vs": { 00:28:08.840 "nvme_version": "1.3" 00:28:08.840 }, 00:28:08.840 "ns_data": { 00:28:08.840 "id": 1, 00:28:08.840 "can_share": true 00:28:08.840 } 00:28:08.840 } 00:28:08.840 ], 00:28:08.840 "mp_policy": "active_passive" 00:28:08.840 } 00:28:08.840 } 00:28:08.840 ] 00:28:08.840 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.841 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.841 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.841 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:08.841 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.841 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.4xLehCYXa5 00:28:08.841 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:08.841 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:08.841 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:08.841 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:08.841 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:08.841 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:28:08.841 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:08.841 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:08.841 rmmod nvme_tcp 00:28:08.841 rmmod nvme_fabrics 00:28:08.841 rmmod nvme_keyring 00:28:08.841 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:08.841 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:08.841 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:08.841 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1919308 ']' 00:28:08.841 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 
1919308 00:28:08.841 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 1919308 ']' 00:28:08.841 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 1919308 00:28:08.841 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:28:08.841 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:08.841 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1919308 00:28:08.841 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:08.841 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:08.841 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1919308' 00:28:08.841 killing process with pid 1919308 00:28:08.841 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 1919308 00:28:08.841 [2024-07-26 01:10:39.174366] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:28:08.841 [2024-07-26 01:10:39.174405] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:08.841 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 1919308 00:28:09.099 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:09.099 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:09.099 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:09.099 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:28:09.099 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:09.099 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.099 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:09.099 01:10:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.002 01:10:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:11.002 00:28:11.002 real 0m5.357s 00:28:11.002 user 0m1.971s 00:28:11.002 sys 0m1.756s 00:28:11.002 01:10:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:11.002 01:10:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:11.002 ************************************ 00:28:11.002 END TEST nvmf_async_init 00:28:11.002 ************************************ 00:28:11.261 01:10:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:11.261 01:10:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:11.261 01:10:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:11.261 01:10:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.261 ************************************ 00:28:11.261 START TEST dma 00:28:11.261 ************************************ 00:28:11.261 01:10:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:11.261 * Looking for test storage... 
00:28:11.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:11.261 01:10:41 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:11.261 01:10:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:28:11.261 01:10:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:11.261 01:10:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:11.261 01:10:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:11.261 01:10:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:11.261 01:10:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:11.261 01:10:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:11.261 01:10:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:11.261 01:10:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:11.261 01:10:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:11.261 01:10:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:11.261 01:10:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:11.261 01:10:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:11.261 01:10:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:11.261 01:10:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:11.261 01:10:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:11.261 01:10:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:11.261 01:10:41 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:11.261 01:10:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:11.261 01:10:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:11.261 01:10:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:11.261 01:10:41 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.261 01:10:41 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # 
'[' -n '' ']' 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:28:11.262 00:28:11.262 real 0m0.073s 00:28:11.262 user 0m0.040s 00:28:11.262 sys 0m0.038s 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:28:11.262 ************************************ 00:28:11.262 END TEST dma 00:28:11.262 ************************************ 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.262 ************************************ 00:28:11.262 START TEST nvmf_identify 00:28:11.262 ************************************ 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:11.262 * Looking for test storage... 
00:28:11.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:28:11.262 01:10:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:13.191 01:10:43 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:13.191 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:13.191 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:13.191 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:13.191 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:13.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:13.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:28:13.191 00:28:13.191 --- 10.0.0.2 ping statistics --- 00:28:13.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.191 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:13.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:13.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:28:13.191 00:28:13.191 --- 10.0.0.1 ping statistics --- 00:28:13.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.191 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:13.191 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:13.449 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:13.449 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:13.449 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
00:28:13.449 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1921428 00:28:13.449 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:13.449 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:13.449 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1921428 00:28:13.449 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 1921428 ']' 00:28:13.449 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:13.449 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:13.449 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:13.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:13.449 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:13.449 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:13.449 [2024-07-26 01:10:43.677145] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:28:13.449 [2024-07-26 01:10:43.677224] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:13.449 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.449 [2024-07-26 01:10:43.745402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:13.449 [2024-07-26 01:10:43.836845] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:13.449 [2024-07-26 01:10:43.836907] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:13.449 [2024-07-26 01:10:43.836924] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:13.449 [2024-07-26 01:10:43.836937] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:13.449 [2024-07-26 01:10:43.836954] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
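The `waitforlisten 1921428` call above blocks until the freshly started `nvmf_tgt` accepts RPCs on `/var/tmp/spdk.sock` (note the `max_retries=100` local in the trace). A simplified sketch of that polling loop, where a socket-existence check stands in for the real RPC probe; the function name, retry interval, and the `-S` test are assumptions for illustration, not the actual `autotest_common.sh` implementation:

```shell
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
        [[ -S $rpc_addr ]] && return 0           # RPC socket exists: treat as listening
        sleep 0.1
    done
    return 1                                     # retries exhausted
}
```

The two exit paths mirror the harness's behavior: fail fast if the target process exits, succeed once the UNIX domain socket appears.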
00:28:13.449 [2024-07-26 01:10:43.837025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.449 [2024-07-26 01:10:43.837094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:13.449 [2024-07-26 01:10:43.837135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:13.449 [2024-07-26 01:10:43.837137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.706 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:13.706 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:28:13.706 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:13.706 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.706 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:13.706 [2024-07-26 01:10:43.971463] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:13.706 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.706 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:13.706 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:13.706 01:10:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:13.706 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:13.706 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.706 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:13.706 Malloc0 00:28:13.706 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.706 01:10:44 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:13.706 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.706 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:13.706 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.706 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:13.707 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.707 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:13.707 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.707 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:13.707 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.707 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:13.707 [2024-07-26 01:10:44.053122] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:13.707 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.707 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:13.707 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.707 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:13.707 01:10:44 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.707 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:13.707 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.707 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:13.707 [ 00:28:13.707 { 00:28:13.707 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:13.707 "subtype": "Discovery", 00:28:13.707 "listen_addresses": [ 00:28:13.707 { 00:28:13.707 "trtype": "TCP", 00:28:13.707 "adrfam": "IPv4", 00:28:13.707 "traddr": "10.0.0.2", 00:28:13.707 "trsvcid": "4420" 00:28:13.707 } 00:28:13.707 ], 00:28:13.707 "allow_any_host": true, 00:28:13.707 "hosts": [] 00:28:13.707 }, 00:28:13.707 { 00:28:13.707 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:13.707 "subtype": "NVMe", 00:28:13.707 "listen_addresses": [ 00:28:13.707 { 00:28:13.707 "trtype": "TCP", 00:28:13.707 "adrfam": "IPv4", 00:28:13.707 "traddr": "10.0.0.2", 00:28:13.707 "trsvcid": "4420" 00:28:13.707 } 00:28:13.707 ], 00:28:13.707 "allow_any_host": true, 00:28:13.707 "hosts": [], 00:28:13.707 "serial_number": "SPDK00000000000001", 00:28:13.707 "model_number": "SPDK bdev Controller", 00:28:13.707 "max_namespaces": 32, 00:28:13.707 "min_cntlid": 1, 00:28:13.707 "max_cntlid": 65519, 00:28:13.707 "namespaces": [ 00:28:13.707 { 00:28:13.707 "nsid": 1, 00:28:13.707 "bdev_name": "Malloc0", 00:28:13.707 "name": "Malloc0", 00:28:13.707 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:13.707 "eui64": "ABCDEF0123456789", 00:28:13.707 "uuid": "0ace8ee5-952b-49dc-9854-d527c3b4e758" 00:28:13.707 } 00:28:13.707 ] 00:28:13.707 } 00:28:13.707 ] 00:28:13.707 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.707 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:13.707 [2024-07-26 01:10:44.092297] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:28:13.707 [2024-07-26 01:10:44.092337] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1921455 ] 00:28:13.707 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.707 [2024-07-26 01:10:44.126435] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:13.707 [2024-07-26 01:10:44.126493] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:13.707 [2024-07-26 01:10:44.126502] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:13.707 [2024-07-26 01:10:44.126518] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:13.707 [2024-07-26 01:10:44.126530] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:13.707 [2024-07-26 01:10:44.126811] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:13.707 [2024-07-26 01:10:44.126863] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xac9ae0 0 00:28:13.707 [2024-07-26 01:10:44.133391] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:13.707 [2024-07-26 01:10:44.133419] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:13.707 [2024-07-26 01:10:44.133429] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 
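`spdk_nvme_identify` above receives its discovery target as a single `-r` string of space-separated `key:value` pairs. A hypothetical helper that assembles that string from the fields seen in the log invocation (`build_trid` is an illustrative name, not an SPDK utility; the key names are taken verbatim from the `-r` argument above):

```shell
build_trid() {
    # Field names match the -r transport-ID string in the log's identify run.
    local trtype=$1 adrfam=$2 traddr=$3 trsvcid=$4 subnqn=$5
    printf 'trtype:%s adrfam:%s traddr:%s trsvcid:%s subnqn:%s\n' \
        "$trtype" "$adrfam" "$traddr" "$trsvcid" "$subnqn"
}

build_trid tcp IPv4 10.0.0.2 4420 nqn.2014-08.org.nvmexpress.discovery
```

Passing the result as `-r "$(build_trid ...)"` reproduces the invocation shape shown in the log.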
00:28:13.707 [2024-07-26 01:10:44.133436] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:13.707 [2024-07-26 01:10:44.133487] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:13.707 [2024-07-26 01:10:44.133499] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:13.707 [2024-07-26 01:10:44.133507] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xac9ae0) 00:28:13.707 [2024-07-26 01:10:44.133526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:14.024 [2024-07-26 01:10:44.133551] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb20240, cid 0, qid 0 00:28:14.024 [2024-07-26 01:10:44.141086] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.024 [2024-07-26 01:10:44.141104] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.024 [2024-07-26 01:10:44.141112] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.024 [2024-07-26 01:10:44.141119] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb20240) on tqpair=0xac9ae0 00:28:14.024 [2024-07-26 01:10:44.141134] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:14.024 [2024-07-26 01:10:44.141145] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:14.024 [2024-07-26 01:10:44.141162] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:14.024 [2024-07-26 01:10:44.141182] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.024 [2024-07-26 01:10:44.141191] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.024 [2024-07-26 01:10:44.141198] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0xac9ae0) 00:28:14.024 [2024-07-26 01:10:44.141209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.024 [2024-07-26 01:10:44.141231] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb20240, cid 0, qid 0 00:28:14.024 [2024-07-26 01:10:44.141415] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.024 [2024-07-26 01:10:44.141431] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.024 [2024-07-26 01:10:44.141438] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.024 [2024-07-26 01:10:44.141445] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb20240) on tqpair=0xac9ae0 00:28:14.024 [2024-07-26 01:10:44.141458] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:14.024 [2024-07-26 01:10:44.141472] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:14.024 [2024-07-26 01:10:44.141484] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.024 [2024-07-26 01:10:44.141492] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.024 [2024-07-26 01:10:44.141498] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xac9ae0) 00:28:14.024 [2024-07-26 01:10:44.141508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.024 [2024-07-26 01:10:44.141529] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb20240, cid 0, qid 0 00:28:14.024 [2024-07-26 01:10:44.141744] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.024 [2024-07-26 01:10:44.141759] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:28:14.024 [2024-07-26 01:10:44.141766] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.024 [2024-07-26 01:10:44.141773] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb20240) on tqpair=0xac9ae0 00:28:14.024 [2024-07-26 01:10:44.141781] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:14.024 [2024-07-26 01:10:44.141795] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:14.024 [2024-07-26 01:10:44.141808] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.024 [2024-07-26 01:10:44.141815] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.024 [2024-07-26 01:10:44.141821] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xac9ae0) 00:28:14.024 [2024-07-26 01:10:44.141831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.024 [2024-07-26 01:10:44.141852] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb20240, cid 0, qid 0 00:28:14.024 [2024-07-26 01:10:44.141974] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.024 [2024-07-26 01:10:44.141986] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.024 [2024-07-26 01:10:44.141993] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.024 [2024-07-26 01:10:44.141999] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb20240) on tqpair=0xac9ae0 00:28:14.024 [2024-07-26 01:10:44.142008] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:14.024 [2024-07-26 01:10:44.142024] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.024 [2024-07-26 01:10:44.142037] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.024 [2024-07-26 01:10:44.142066] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xac9ae0) 00:28:14.024 [2024-07-26 01:10:44.142079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.024 [2024-07-26 01:10:44.142101] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb20240, cid 0, qid 0 00:28:14.024 [2024-07-26 01:10:44.142210] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.024 [2024-07-26 01:10:44.142226] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.024 [2024-07-26 01:10:44.142233] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.024 [2024-07-26 01:10:44.142240] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb20240) on tqpair=0xac9ae0 00:28:14.024 [2024-07-26 01:10:44.142248] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:14.024 [2024-07-26 01:10:44.142257] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:14.024 [2024-07-26 01:10:44.142270] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:14.024 [2024-07-26 01:10:44.142381] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:14.024 [2024-07-26 01:10:44.142390] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 
00:28:14.024 [2024-07-26 01:10:44.142404] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.024 [2024-07-26 01:10:44.142411] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.024 [2024-07-26 01:10:44.142418] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xac9ae0) 00:28:14.024 [2024-07-26 01:10:44.142443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.024 [2024-07-26 01:10:44.142463] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb20240, cid 0, qid 0 00:28:14.024 [2024-07-26 01:10:44.142629] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.024 [2024-07-26 01:10:44.142644] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.024 [2024-07-26 01:10:44.142651] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.024 [2024-07-26 01:10:44.142658] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb20240) on tqpair=0xac9ae0 00:28:14.025 [2024-07-26 01:10:44.142666] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:14.025 [2024-07-26 01:10:44.142683] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.025 [2024-07-26 01:10:44.142692] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.025 [2024-07-26 01:10:44.142698] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xac9ae0) 00:28:14.025 [2024-07-26 01:10:44.142708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.025 [2024-07-26 01:10:44.142728] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb20240, cid 0, qid 0 00:28:14.025 [2024-07-26 01:10:44.142826] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.025 [2024-07-26 01:10:44.142841] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.025 [2024-07-26 01:10:44.142848] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.025 [2024-07-26 01:10:44.142854] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb20240) on tqpair=0xac9ae0 00:28:14.025 [2024-07-26 01:10:44.142862] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:14.025 [2024-07-26 01:10:44.142874] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:14.025 [2024-07-26 01:10:44.142887] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:14.025 [2024-07-26 01:10:44.142906] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:14.025 [2024-07-26 01:10:44.142921] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.025 [2024-07-26 01:10:44.142928] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xac9ae0) 00:28:14.025 [2024-07-26 01:10:44.142939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.025 [2024-07-26 01:10:44.142959] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb20240, cid 0, qid 0 00:28:14.025 [2024-07-26 01:10:44.143126] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:14.025 [2024-07-26 01:10:44.143142] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:14.025 
[2024-07-26 01:10:44.143150] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:14.025 [2024-07-26 01:10:44.143157] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xac9ae0): datao=0, datal=4096, cccid=0 00:28:14.025 [2024-07-26 01:10:44.143165] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb20240) on tqpair(0xac9ae0): expected_datao=0, payload_size=4096 00:28:14.025 [2024-07-26 01:10:44.143173] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.025 [2024-07-26 01:10:44.143190] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:14.025 [2024-07-26 01:10:44.143200] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:14.025 [2024-07-26 01:10:44.185090] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.025 [2024-07-26 01:10:44.185111] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.025 [2024-07-26 01:10:44.185119] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.025 [2024-07-26 01:10:44.185127] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb20240) on tqpair=0xac9ae0 00:28:14.025 [2024-07-26 01:10:44.185138] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:14.025 [2024-07-26 01:10:44.185147] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:14.025 [2024-07-26 01:10:44.185156] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:14.025 [2024-07-26 01:10:44.185165] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:14.025 [2024-07-26 01:10:44.185172] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 
00:28:14.025 [2024-07-26 01:10:44.185180] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:14.025 [2024-07-26 01:10:44.185196] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:14.025 [2024-07-26 01:10:44.185214] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.025 [2024-07-26 01:10:44.185223] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.025 [2024-07-26 01:10:44.185231] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xac9ae0) 00:28:14.025 [2024-07-26 01:10:44.185243] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:14.025 [2024-07-26 01:10:44.185266] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb20240, cid 0, qid 0 00:28:14.025 [2024-07-26 01:10:44.185431] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.025 [2024-07-26 01:10:44.185449] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.025 [2024-07-26 01:10:44.185457] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.025 [2024-07-26 01:10:44.185464] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb20240) on tqpair=0xac9ae0 00:28:14.025 [2024-07-26 01:10:44.185475] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.025 [2024-07-26 01:10:44.185482] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.025 [2024-07-26 01:10:44.185489] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xac9ae0) 00:28:14.025 [2024-07-26 01:10:44.185498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 
cdw10:00000000 cdw11:00000000 00:28:14.025 [2024-07-26 01:10:44.185508] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.025 [2024-07-26 01:10:44.185515] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.025 [2024-07-26 01:10:44.185521] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xac9ae0) 00:28:14.025 [2024-07-26 01:10:44.185529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:14.025 [2024-07-26 01:10:44.185539] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.025 [2024-07-26 01:10:44.185545] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.025 [2024-07-26 01:10:44.185551] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xac9ae0) 00:28:14.025 [2024-07-26 01:10:44.185560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:14.025 [2024-07-26 01:10:44.185569] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.025 [2024-07-26 01:10:44.185589] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.025 [2024-07-26 01:10:44.185596] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac9ae0) 00:28:14.025 [2024-07-26 01:10:44.185604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:14.025 [2024-07-26 01:10:44.185613] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:14.025 [2024-07-26 01:10:44.185631] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 
00:28:14.025 [2024-07-26 01:10:44.185644] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.025 [2024-07-26 01:10:44.185651] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xac9ae0) 00:28:14.025 [2024-07-26 01:10:44.185660] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.025 [2024-07-26 01:10:44.185682] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb20240, cid 0, qid 0 00:28:14.025 [2024-07-26 01:10:44.185708] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb203c0, cid 1, qid 0 00:28:14.025 [2024-07-26 01:10:44.185715] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb20540, cid 2, qid 0 00:28:14.025 [2024-07-26 01:10:44.185723] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb206c0, cid 3, qid 0 00:28:14.025 [2024-07-26 01:10:44.185730] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb20840, cid 4, qid 0 00:28:14.025 [2024-07-26 01:10:44.185893] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.025 [2024-07-26 01:10:44.185908] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.025 [2024-07-26 01:10:44.185915] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.025 [2024-07-26 01:10:44.185922] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb20840) on tqpair=0xac9ae0 00:28:14.025 [2024-07-26 01:10:44.185930] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:14.025 [2024-07-26 01:10:44.185943] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:14.025 [2024-07-26 01:10:44.185961] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:28:14.025 [2024-07-26 01:10:44.185970] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xac9ae0) 00:28:14.025 [2024-07-26 01:10:44.185981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.025 [2024-07-26 01:10:44.186015] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb20840, cid 4, qid 0 00:28:14.025 [2024-07-26 01:10:44.186223] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:14.025 [2024-07-26 01:10:44.186237] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:14.025 [2024-07-26 01:10:44.186245] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:14.025 [2024-07-26 01:10:44.186251] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xac9ae0): datao=0, datal=4096, cccid=4 00:28:14.025 [2024-07-26 01:10:44.186259] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb20840) on tqpair(0xac9ae0): expected_datao=0, payload_size=4096 00:28:14.025 [2024-07-26 01:10:44.186267] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.025 [2024-07-26 01:10:44.186283] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:14.025 [2024-07-26 01:10:44.186293] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:14.025 [2024-07-26 01:10:44.186352] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.025 [2024-07-26 01:10:44.186364] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.025 [2024-07-26 01:10:44.186371] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.025 [2024-07-26 01:10:44.186378] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb20840) on tqpair=0xac9ae0 00:28:14.025 [2024-07-26 01:10:44.186412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: 
*DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:14.025 [2024-07-26 01:10:44.186447] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.025 [2024-07-26 01:10:44.186457] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xac9ae0) 00:28:14.026 [2024-07-26 01:10:44.186467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.026 [2024-07-26 01:10:44.186478] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.026 [2024-07-26 01:10:44.186486] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.026 [2024-07-26 01:10:44.186492] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xac9ae0) 00:28:14.026 [2024-07-26 01:10:44.186501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:14.026 [2024-07-26 01:10:44.186527] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb20840, cid 4, qid 0 00:28:14.026 [2024-07-26 01:10:44.186538] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb209c0, cid 5, qid 0 00:28:14.026 [2024-07-26 01:10:44.186710] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:14.026 [2024-07-26 01:10:44.186725] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:14.026 [2024-07-26 01:10:44.186732] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:14.026 [2024-07-26 01:10:44.186738] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xac9ae0): datao=0, datal=1024, cccid=4 00:28:14.026 [2024-07-26 01:10:44.186746] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb20840) on tqpair(0xac9ae0): expected_datao=0, payload_size=1024 00:28:14.026 [2024-07-26 
01:10:44.186754] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.026 [2024-07-26 01:10:44.186763] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:14.026 [2024-07-26 01:10:44.186774] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:14.026 [2024-07-26 01:10:44.186783] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.026 [2024-07-26 01:10:44.186792] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.026 [2024-07-26 01:10:44.186799] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.026 [2024-07-26 01:10:44.186805] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb209c0) on tqpair=0xac9ae0 00:28:14.026 [2024-07-26 01:10:44.228218] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.026 [2024-07-26 01:10:44.228239] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.026 [2024-07-26 01:10:44.228247] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.026 [2024-07-26 01:10:44.228254] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb20840) on tqpair=0xac9ae0 00:28:14.026 [2024-07-26 01:10:44.228271] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.026 [2024-07-26 01:10:44.228279] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xac9ae0) 00:28:14.026 [2024-07-26 01:10:44.228291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.026 [2024-07-26 01:10:44.228321] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb20840, cid 4, qid 0 00:28:14.026 [2024-07-26 01:10:44.228457] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:14.026 [2024-07-26 01:10:44.228473] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=7 00:28:14.026 [2024-07-26 01:10:44.228481] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:14.026 [2024-07-26 01:10:44.228487] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xac9ae0): datao=0, datal=3072, cccid=4 00:28:14.026 [2024-07-26 01:10:44.228495] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb20840) on tqpair(0xac9ae0): expected_datao=0, payload_size=3072 00:28:14.026 [2024-07-26 01:10:44.228502] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.026 [2024-07-26 01:10:44.228512] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:14.026 [2024-07-26 01:10:44.228520] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:14.026 [2024-07-26 01:10:44.228532] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.026 [2024-07-26 01:10:44.228541] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.026 [2024-07-26 01:10:44.228548] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.026 [2024-07-26 01:10:44.228554] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb20840) on tqpair=0xac9ae0 00:28:14.026 [2024-07-26 01:10:44.228569] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.026 [2024-07-26 01:10:44.228577] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xac9ae0) 00:28:14.026 [2024-07-26 01:10:44.228587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.026 [2024-07-26 01:10:44.228614] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb20840, cid 4, qid 0 00:28:14.026 [2024-07-26 01:10:44.228732] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:14.026 [2024-07-26 01:10:44.228744] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:28:14.026 [2024-07-26 01:10:44.228751] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:14.026 [2024-07-26 01:10:44.228757] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xac9ae0): datao=0, datal=8, cccid=4 00:28:14.026 [2024-07-26 01:10:44.228765] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb20840) on tqpair(0xac9ae0): expected_datao=0, payload_size=8 00:28:14.026 [2024-07-26 01:10:44.228772] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.026 [2024-07-26 01:10:44.228781] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:14.026 [2024-07-26 01:10:44.228788] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:14.026 [2024-07-26 01:10:44.273083] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.026 [2024-07-26 01:10:44.273122] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.026 [2024-07-26 01:10:44.273132] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.026 [2024-07-26 01:10:44.273139] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb20840) on tqpair=0xac9ae0 00:28:14.026 ===================================================== 00:28:14.026 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:14.026 ===================================================== 00:28:14.026 Controller Capabilities/Features 00:28:14.026 ================================ 00:28:14.026 Vendor ID: 0000 00:28:14.026 Subsystem Vendor ID: 0000 00:28:14.026 Serial Number: .................... 00:28:14.026 Model Number: ........................................ 
00:28:14.026 Firmware Version: 24.09 00:28:14.026 Recommended Arb Burst: 0 00:28:14.026 IEEE OUI Identifier: 00 00 00 00:28:14.026 Multi-path I/O 00:28:14.026 May have multiple subsystem ports: No 00:28:14.026 May have multiple controllers: No 00:28:14.026 Associated with SR-IOV VF: No 00:28:14.026 Max Data Transfer Size: 131072 00:28:14.026 Max Number of Namespaces: 0 00:28:14.026 Max Number of I/O Queues: 1024 00:28:14.026 NVMe Specification Version (VS): 1.3 00:28:14.026 NVMe Specification Version (Identify): 1.3 00:28:14.026 Maximum Queue Entries: 128 00:28:14.026 Contiguous Queues Required: Yes 00:28:14.026 Arbitration Mechanisms Supported 00:28:14.026 Weighted Round Robin: Not Supported 00:28:14.026 Vendor Specific: Not Supported 00:28:14.026 Reset Timeout: 15000 ms 00:28:14.026 Doorbell Stride: 4 bytes 00:28:14.026 NVM Subsystem Reset: Not Supported 00:28:14.026 Command Sets Supported 00:28:14.026 NVM Command Set: Supported 00:28:14.026 Boot Partition: Not Supported 00:28:14.026 Memory Page Size Minimum: 4096 bytes 00:28:14.026 Memory Page Size Maximum: 4096 bytes 00:28:14.026 Persistent Memory Region: Not Supported 00:28:14.026 Optional Asynchronous Events Supported 00:28:14.026 Namespace Attribute Notices: Not Supported 00:28:14.026 Firmware Activation Notices: Not Supported 00:28:14.026 ANA Change Notices: Not Supported 00:28:14.026 PLE Aggregate Log Change Notices: Not Supported 00:28:14.026 LBA Status Info Alert Notices: Not Supported 00:28:14.026 EGE Aggregate Log Change Notices: Not Supported 00:28:14.026 Normal NVM Subsystem Shutdown event: Not Supported 00:28:14.026 Zone Descriptor Change Notices: Not Supported 00:28:14.026 Discovery Log Change Notices: Supported 00:28:14.026 Controller Attributes 00:28:14.026 128-bit Host Identifier: Not Supported 00:28:14.026 Non-Operational Permissive Mode: Not Supported 00:28:14.026 NVM Sets: Not Supported 00:28:14.026 Read Recovery Levels: Not Supported 00:28:14.026 Endurance Groups: Not Supported 00:28:14.026 
Predictable Latency Mode: Not Supported 00:28:14.026 Traffic Based Keep ALive: Not Supported 00:28:14.026 Namespace Granularity: Not Supported 00:28:14.026 SQ Associations: Not Supported 00:28:14.026 UUID List: Not Supported 00:28:14.026 Multi-Domain Subsystem: Not Supported 00:28:14.026 Fixed Capacity Management: Not Supported 00:28:14.026 Variable Capacity Management: Not Supported 00:28:14.026 Delete Endurance Group: Not Supported 00:28:14.026 Delete NVM Set: Not Supported 00:28:14.026 Extended LBA Formats Supported: Not Supported 00:28:14.026 Flexible Data Placement Supported: Not Supported 00:28:14.026 00:28:14.026 Controller Memory Buffer Support 00:28:14.026 ================================ 00:28:14.026 Supported: No 00:28:14.026 00:28:14.026 Persistent Memory Region Support 00:28:14.026 ================================ 00:28:14.026 Supported: No 00:28:14.026 00:28:14.026 Admin Command Set Attributes 00:28:14.026 ============================ 00:28:14.026 Security Send/Receive: Not Supported 00:28:14.026 Format NVM: Not Supported 00:28:14.026 Firmware Activate/Download: Not Supported 00:28:14.026 Namespace Management: Not Supported 00:28:14.026 Device Self-Test: Not Supported 00:28:14.026 Directives: Not Supported 00:28:14.026 NVMe-MI: Not Supported 00:28:14.026 Virtualization Management: Not Supported 00:28:14.026 Doorbell Buffer Config: Not Supported 00:28:14.026 Get LBA Status Capability: Not Supported 00:28:14.026 Command & Feature Lockdown Capability: Not Supported 00:28:14.026 Abort Command Limit: 1 00:28:14.026 Async Event Request Limit: 4 00:28:14.027 Number of Firmware Slots: N/A 00:28:14.027 Firmware Slot 1 Read-Only: N/A 00:28:14.027 Firmware Activation Without Reset: N/A 00:28:14.027 Multiple Update Detection Support: N/A 00:28:14.027 Firmware Update Granularity: No Information Provided 00:28:14.027 Per-Namespace SMART Log: No 00:28:14.027 Asymmetric Namespace Access Log Page: Not Supported 00:28:14.027 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:28:14.027 Command Effects Log Page: Not Supported 00:28:14.027 Get Log Page Extended Data: Supported 00:28:14.027 Telemetry Log Pages: Not Supported 00:28:14.027 Persistent Event Log Pages: Not Supported 00:28:14.027 Supported Log Pages Log Page: May Support 00:28:14.027 Commands Supported & Effects Log Page: Not Supported 00:28:14.027 Feature Identifiers & Effects Log Page:May Support 00:28:14.027 NVMe-MI Commands & Effects Log Page: May Support 00:28:14.027 Data Area 4 for Telemetry Log: Not Supported 00:28:14.027 Error Log Page Entries Supported: 128 00:28:14.027 Keep Alive: Not Supported 00:28:14.027 00:28:14.027 NVM Command Set Attributes 00:28:14.027 ========================== 00:28:14.027 Submission Queue Entry Size 00:28:14.027 Max: 1 00:28:14.027 Min: 1 00:28:14.027 Completion Queue Entry Size 00:28:14.027 Max: 1 00:28:14.027 Min: 1 00:28:14.027 Number of Namespaces: 0 00:28:14.027 Compare Command: Not Supported 00:28:14.027 Write Uncorrectable Command: Not Supported 00:28:14.027 Dataset Management Command: Not Supported 00:28:14.027 Write Zeroes Command: Not Supported 00:28:14.027 Set Features Save Field: Not Supported 00:28:14.027 Reservations: Not Supported 00:28:14.027 Timestamp: Not Supported 00:28:14.027 Copy: Not Supported 00:28:14.027 Volatile Write Cache: Not Present 00:28:14.027 Atomic Write Unit (Normal): 1 00:28:14.027 Atomic Write Unit (PFail): 1 00:28:14.027 Atomic Compare & Write Unit: 1 00:28:14.027 Fused Compare & Write: Supported 00:28:14.027 Scatter-Gather List 00:28:14.027 SGL Command Set: Supported 00:28:14.027 SGL Keyed: Supported 00:28:14.027 SGL Bit Bucket Descriptor: Not Supported 00:28:14.027 SGL Metadata Pointer: Not Supported 00:28:14.027 Oversized SGL: Not Supported 00:28:14.027 SGL Metadata Address: Not Supported 00:28:14.027 SGL Offset: Supported 00:28:14.027 Transport SGL Data Block: Not Supported 00:28:14.027 Replay Protected Memory Block: Not Supported 00:28:14.027 00:28:14.027 
Firmware Slot Information 00:28:14.027 ========================= 00:28:14.027 Active slot: 0 00:28:14.027 00:28:14.027 00:28:14.027 Error Log 00:28:14.027 ========= 00:28:14.027 00:28:14.027 Active Namespaces 00:28:14.027 ================= 00:28:14.027 Discovery Log Page 00:28:14.027 ================== 00:28:14.027 Generation Counter: 2 00:28:14.027 Number of Records: 2 00:28:14.027 Record Format: 0 00:28:14.027 00:28:14.027 Discovery Log Entry 0 00:28:14.027 ---------------------- 00:28:14.027 Transport Type: 3 (TCP) 00:28:14.027 Address Family: 1 (IPv4) 00:28:14.027 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:14.027 Entry Flags: 00:28:14.027 Duplicate Returned Information: 1 00:28:14.027 Explicit Persistent Connection Support for Discovery: 1 00:28:14.027 Transport Requirements: 00:28:14.027 Secure Channel: Not Required 00:28:14.027 Port ID: 0 (0x0000) 00:28:14.027 Controller ID: 65535 (0xffff) 00:28:14.027 Admin Max SQ Size: 128 00:28:14.027 Transport Service Identifier: 4420 00:28:14.027 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:14.027 Transport Address: 10.0.0.2 00:28:14.027 Discovery Log Entry 1 00:28:14.027 ---------------------- 00:28:14.027 Transport Type: 3 (TCP) 00:28:14.027 Address Family: 1 (IPv4) 00:28:14.027 Subsystem Type: 2 (NVM Subsystem) 00:28:14.027 Entry Flags: 00:28:14.027 Duplicate Returned Information: 0 00:28:14.027 Explicit Persistent Connection Support for Discovery: 0 00:28:14.027 Transport Requirements: 00:28:14.027 Secure Channel: Not Required 00:28:14.027 Port ID: 0 (0x0000) 00:28:14.027 Controller ID: 65535 (0xffff) 00:28:14.027 Admin Max SQ Size: 128 00:28:14.027 Transport Service Identifier: 4420 00:28:14.027 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:28:14.027 Transport Address: 10.0.0.2 [2024-07-26 01:10:44.273256] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:14.027 [2024-07-26 01:10:44.273278] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb20240) on tqpair=0xac9ae0 00:28:14.027 [2024-07-26 01:10:44.273290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.027 [2024-07-26 01:10:44.273300] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb203c0) on tqpair=0xac9ae0 00:28:14.027 [2024-07-26 01:10:44.273307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.027 [2024-07-26 01:10:44.273316] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb20540) on tqpair=0xac9ae0 00:28:14.027 [2024-07-26 01:10:44.273324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.027 [2024-07-26 01:10:44.273332] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb206c0) on tqpair=0xac9ae0 00:28:14.027 [2024-07-26 01:10:44.273339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.027 [2024-07-26 01:10:44.273373] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.027 [2024-07-26 01:10:44.273382] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.027 [2024-07-26 01:10:44.273389] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac9ae0) 00:28:14.027 [2024-07-26 01:10:44.273400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.027 [2024-07-26 01:10:44.273438] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb206c0, cid 3, qid 0 00:28:14.027 [2024-07-26 01:10:44.273605] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.027 [2024-07-26 01:10:44.273618] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.027 [2024-07-26 01:10:44.273625] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.027 [2024-07-26 01:10:44.273632] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb206c0) on tqpair=0xac9ae0 00:28:14.027 [2024-07-26 01:10:44.273643] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.027 [2024-07-26 01:10:44.273651] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.027 [2024-07-26 01:10:44.273657] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac9ae0) 00:28:14.027 [2024-07-26 01:10:44.273667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.027 [2024-07-26 01:10:44.273693] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb206c0, cid 3, qid 0 00:28:14.027 [2024-07-26 01:10:44.273809] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.027 [2024-07-26 01:10:44.273824] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.027 [2024-07-26 01:10:44.273831] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.027 [2024-07-26 01:10:44.273838] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb206c0) on tqpair=0xac9ae0 00:28:14.027 [2024-07-26 01:10:44.273846] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:14.027 [2024-07-26 01:10:44.273854] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:14.027 [2024-07-26 01:10:44.273870] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.027 [2024-07-26 01:10:44.273880] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.027 [2024-07-26 01:10:44.273890] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac9ae0) 00:28:14.027 [2024-07-26 01:10:44.273901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.027 [2024-07-26 01:10:44.273922] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb206c0, cid 3, qid 0 00:28:14.027 [2024-07-26 01:10:44.274024] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.027 [2024-07-26 01:10:44.274052] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.027 [2024-07-26 01:10:44.274072] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.027 [2024-07-26 01:10:44.274080] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb206c0) on tqpair=0xac9ae0 00:28:14.027 [2024-07-26 01:10:44.274099] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.027 [2024-07-26 01:10:44.274116] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.027 [2024-07-26 01:10:44.274123] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac9ae0) 00:28:14.027 [2024-07-26 01:10:44.274133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.027 [2024-07-26 01:10:44.274155] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb206c0, cid 3, qid 0 00:28:14.027 [2024-07-26 01:10:44.274256] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.027 [2024-07-26 01:10:44.274272] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.027 [2024-07-26 01:10:44.274279] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.027 [2024-07-26 01:10:44.274286] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb206c0) on tqpair=0xac9ae0 00:28:14.027 [2024-07-26 
01:10:44.274302] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.027 [2024-07-26 01:10:44.274312] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.028 [2024-07-26 01:10:44.274319] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac9ae0) 00:28:14.028 [2024-07-26 01:10:44.274329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.028 [2024-07-26 01:10:44.274350] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb206c0, cid 3, qid 0 00:28:14.028 [2024-07-26 01:10:44.274459] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.028 [2024-07-26 01:10:44.274474] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.028 [2024-07-26 01:10:44.274481] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.028 [2024-07-26 01:10:44.274488] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb206c0) on tqpair=0xac9ae0 00:28:14.028 [2024-07-26 01:10:44.274504] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.028 [2024-07-26 01:10:44.274513] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.028 [2024-07-26 01:10:44.274520] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac9ae0) 00:28:14.028 [2024-07-26 01:10:44.274530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.028 [2024-07-26 01:10:44.274550] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb206c0, cid 3, qid 0 00:28:14.028 [2024-07-26 01:10:44.274646] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.028 [2024-07-26 01:10:44.274662] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.028 [2024-07-26 01:10:44.274669] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.028 [2024-07-26 01:10:44.274675] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb206c0) on tqpair=0xac9ae0 00:28:14.028 [2024-07-26 01:10:44.274691] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.028 [2024-07-26 01:10:44.274701] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.028 [2024-07-26 01:10:44.274707] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac9ae0) 00:28:14.028 [2024-07-26 01:10:44.274721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.028 [2024-07-26 01:10:44.274742] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb206c0, cid 3, qid 0 00:28:14.028 [2024-07-26 01:10:44.274842] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.028 [2024-07-26 01:10:44.274855] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.028 [2024-07-26 01:10:44.274862] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.028 [2024-07-26 01:10:44.274868] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb206c0) on tqpair=0xac9ae0 00:28:14.028 [2024-07-26 01:10:44.274884] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.028 [2024-07-26 01:10:44.274893] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.028 [2024-07-26 01:10:44.274900] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac9ae0) 00:28:14.028 [2024-07-26 01:10:44.274910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.028 [2024-07-26 01:10:44.274930] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb206c0, cid 3, qid 0 00:28:14.028 [2024-07-26 
01:10:44.275026] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.028 [2024-07-26 01:10:44.275056] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.028 [2024-07-26 01:10:44.275075] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.028 [2024-07-26 01:10:44.275082] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb206c0) on tqpair=0xac9ae0 00:28:14.028 [2024-07-26 01:10:44.275101] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.028 [2024-07-26 01:10:44.275111] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.028 [2024-07-26 01:10:44.275118] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac9ae0) 00:28:14.028 [2024-07-26 01:10:44.275128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.028 [2024-07-26 01:10:44.275150] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb206c0, cid 3, qid 0 00:28:14.028 [2024-07-26 01:10:44.275246] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.028 [2024-07-26 01:10:44.275261] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.028 [2024-07-26 01:10:44.275269] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.028 [2024-07-26 01:10:44.275276] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb206c0) on tqpair=0xac9ae0 00:28:14.028 [2024-07-26 01:10:44.275293] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.028 [2024-07-26 01:10:44.275302] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.028 [2024-07-26 01:10:44.275309] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac9ae0) 00:28:14.028 [2024-07-26 01:10:44.275319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.028 [2024-07-26 01:10:44.275341] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb206c0, cid 3, qid 0 00:28:14.028 [2024-07-26 01:10:44.275456] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.028 [2024-07-26 01:10:44.275469] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.028 [2024-07-26 01:10:44.275476] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.028 [2024-07-26 01:10:44.275483] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb206c0) on tqpair=0xac9ae0 00:28:14.028 [2024-07-26 01:10:44.275499] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.028 [2024-07-26 01:10:44.275508] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.028 [2024-07-26 01:10:44.275515] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac9ae0) 00:28:14.028 [2024-07-26 01:10:44.275525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.028 [2024-07-26 01:10:44.275549] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb206c0, cid 3, qid 0 00:28:14.028 [2024-07-26 01:10:44.275643] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.028 [2024-07-26 01:10:44.275658] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.028 [2024-07-26 01:10:44.275665] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.028 [2024-07-26 01:10:44.275672] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb206c0) on tqpair=0xac9ae0 00:28:14.028 [2024-07-26 01:10:44.275688] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.028 [2024-07-26 01:10:44.275697] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.028 
[2024-07-26 01:10:44.275704] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac9ae0) 00:28:14.028 [2024-07-26 01:10:44.275714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.028 [2024-07-26 01:10:44.275734] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb206c0, cid 3, qid 0 00:28:14.028 [2024-07-26 01:10:44.275830] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.028 [2024-07-26 01:10:44.275843] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.028 [2024-07-26 01:10:44.275850] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.028 [2024-07-26 01:10:44.275857] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb206c0) on tqpair=0xac9ae0 00:28:14.028 [2024-07-26 01:10:44.275872] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.028 [2024-07-26 01:10:44.275882] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.028 [2024-07-26 01:10:44.275888] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac9ae0) 00:28:14.028 [2024-07-26 01:10:44.275898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.028 [2024-07-26 01:10:44.275918] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb206c0, cid 3, qid 0 00:28:14.028 [2024-07-26 01:10:44.276018] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.028 [2024-07-26 01:10:44.276030] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.028 [2024-07-26 01:10:44.276037] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.028 [2024-07-26 01:10:44.276069] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb206c0) on tqpair=0xac9ae0 
00:28:14.028 [2024-07-26 01:10:44.276087] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.028 [2024-07-26 01:10:44.276097] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.028 [2024-07-26 01:10:44.276104] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac9ae0) 00:28:14.028 [2024-07-26 01:10:44.276115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.028 [2024-07-26 01:10:44.276136] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb206c0, cid 3, qid 0 00:28:14.028 [2024-07-26 01:10:44.280073] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.028 [2024-07-26 01:10:44.280092] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.028 [2024-07-26 01:10:44.280099] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.028 [2024-07-26 01:10:44.280106] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb206c0) on tqpair=0xac9ae0 00:28:14.028 [2024-07-26 01:10:44.280125] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.028 [2024-07-26 01:10:44.280135] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.028 [2024-07-26 01:10:44.280141] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac9ae0) 00:28:14.028 [2024-07-26 01:10:44.280152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.028 [2024-07-26 01:10:44.280175] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb206c0, cid 3, qid 0 00:28:14.028 [2024-07-26 01:10:44.280304] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.028 [2024-07-26 01:10:44.280320] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.028 
[2024-07-26 01:10:44.280327] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.028 [2024-07-26 01:10:44.280334] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb206c0) on tqpair=0xac9ae0 00:28:14.028 [2024-07-26 01:10:44.280348] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:28:14.028 00:28:14.029 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:14.029 [2024-07-26 01:10:44.315284] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:28:14.029 [2024-07-26 01:10:44.315356] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1921575 ] 00:28:14.029 EAL: No free 2048 kB hugepages reported on node 1 00:28:14.029 [2024-07-26 01:10:44.350821] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:14.029 [2024-07-26 01:10:44.350867] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:14.029 [2024-07-26 01:10:44.350876] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:14.029 [2024-07-26 01:10:44.350889] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:14.029 [2024-07-26 01:10:44.350901] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:14.029 [2024-07-26 01:10:44.354094] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 
00:28:14.029 [2024-07-26 01:10:44.354131] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1355ae0 0 00:28:14.029 [2024-07-26 01:10:44.362071] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:14.029 [2024-07-26 01:10:44.362094] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:14.029 [2024-07-26 01:10:44.362103] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:14.029 [2024-07-26 01:10:44.362110] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:14.029 [2024-07-26 01:10:44.362161] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.029 [2024-07-26 01:10:44.362174] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.029 [2024-07-26 01:10:44.362181] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1355ae0) 00:28:14.029 [2024-07-26 01:10:44.362195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:14.029 [2024-07-26 01:10:44.362222] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ac240, cid 0, qid 0 00:28:14.029 [2024-07-26 01:10:44.370076] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.029 [2024-07-26 01:10:44.370095] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.029 [2024-07-26 01:10:44.370103] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.029 [2024-07-26 01:10:44.370110] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ac240) on tqpair=0x1355ae0 00:28:14.029 [2024-07-26 01:10:44.370128] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:14.029 [2024-07-26 01:10:44.370140] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:14.029 [2024-07-26 
01:10:44.370156] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:14.029 [2024-07-26 01:10:44.370175] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.029 [2024-07-26 01:10:44.370184] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.029 [2024-07-26 01:10:44.370192] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1355ae0) 00:28:14.029 [2024-07-26 01:10:44.370203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.029 [2024-07-26 01:10:44.370227] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ac240, cid 0, qid 0 00:28:14.029 [2024-07-26 01:10:44.370334] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.029 [2024-07-26 01:10:44.370348] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.029 [2024-07-26 01:10:44.370355] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.029 [2024-07-26 01:10:44.370362] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ac240) on tqpair=0x1355ae0 00:28:14.029 [2024-07-26 01:10:44.370374] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:14.029 [2024-07-26 01:10:44.370389] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:14.029 [2024-07-26 01:10:44.370402] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.029 [2024-07-26 01:10:44.370410] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.029 [2024-07-26 01:10:44.370416] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1355ae0) 00:28:14.029 [2024-07-26 01:10:44.370427] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.029 [2024-07-26 01:10:44.370449] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ac240, cid 0, qid 0 00:28:14.029 [2024-07-26 01:10:44.370555] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.029 [2024-07-26 01:10:44.370570] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.029 [2024-07-26 01:10:44.370578] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.029 [2024-07-26 01:10:44.370585] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ac240) on tqpair=0x1355ae0 00:28:14.029 [2024-07-26 01:10:44.370593] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:14.029 [2024-07-26 01:10:44.370608] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:14.029 [2024-07-26 01:10:44.370621] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.029 [2024-07-26 01:10:44.370629] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.029 [2024-07-26 01:10:44.370636] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1355ae0) 00:28:14.029 [2024-07-26 01:10:44.370647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.029 [2024-07-26 01:10:44.370669] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ac240, cid 0, qid 0 00:28:14.029 [2024-07-26 01:10:44.370775] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.029 [2024-07-26 01:10:44.370788] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.029 [2024-07-26 01:10:44.370795] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.029 [2024-07-26 01:10:44.370802] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ac240) on tqpair=0x1355ae0 00:28:14.029 [2024-07-26 01:10:44.370811] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:14.029 [2024-07-26 01:10:44.370828] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.029 [2024-07-26 01:10:44.370837] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.029 [2024-07-26 01:10:44.370848] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1355ae0) 00:28:14.029 [2024-07-26 01:10:44.370859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.029 [2024-07-26 01:10:44.370880] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ac240, cid 0, qid 0 00:28:14.029 [2024-07-26 01:10:44.370982] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.029 [2024-07-26 01:10:44.370995] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.029 [2024-07-26 01:10:44.371003] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.029 [2024-07-26 01:10:44.371010] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ac240) on tqpair=0x1355ae0 00:28:14.029 [2024-07-26 01:10:44.371017] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:14.029 [2024-07-26 01:10:44.371026] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:14.029 [2024-07-26 01:10:44.371040] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable 
controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:14.029 [2024-07-26 01:10:44.371150] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:14.029 [2024-07-26 01:10:44.371160] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:14.029 [2024-07-26 01:10:44.371173] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.029 [2024-07-26 01:10:44.371181] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.029 [2024-07-26 01:10:44.371188] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1355ae0) 00:28:14.029 [2024-07-26 01:10:44.371198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.029 [2024-07-26 01:10:44.371220] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ac240, cid 0, qid 0 00:28:14.029 [2024-07-26 01:10:44.371326] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.029 [2024-07-26 01:10:44.371342] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.029 [2024-07-26 01:10:44.371349] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.029 [2024-07-26 01:10:44.371356] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ac240) on tqpair=0x1355ae0 00:28:14.029 [2024-07-26 01:10:44.371365] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:14.029 [2024-07-26 01:10:44.371382] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.029 [2024-07-26 01:10:44.371392] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.030 [2024-07-26 01:10:44.371399] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0x1355ae0) 00:28:14.030 [2024-07-26 01:10:44.371409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.030 [2024-07-26 01:10:44.371431] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ac240, cid 0, qid 0 00:28:14.030 [2024-07-26 01:10:44.371536] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.030 [2024-07-26 01:10:44.371549] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.030 [2024-07-26 01:10:44.371556] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.030 [2024-07-26 01:10:44.371563] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ac240) on tqpair=0x1355ae0 00:28:14.030 [2024-07-26 01:10:44.371570] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:14.030 [2024-07-26 01:10:44.371579] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:14.030 [2024-07-26 01:10:44.371596] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:14.030 [2024-07-26 01:10:44.371611] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:14.030 [2024-07-26 01:10:44.371624] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.030 [2024-07-26 01:10:44.371632] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1355ae0) 00:28:14.030 [2024-07-26 01:10:44.371642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.030 
[2024-07-26 01:10:44.371664] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ac240, cid 0, qid 0 00:28:14.030 [2024-07-26 01:10:44.371812] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:14.030 [2024-07-26 01:10:44.371825] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:14.030 [2024-07-26 01:10:44.371832] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:14.030 [2024-07-26 01:10:44.371839] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1355ae0): datao=0, datal=4096, cccid=0 00:28:14.030 [2024-07-26 01:10:44.371847] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13ac240) on tqpair(0x1355ae0): expected_datao=0, payload_size=4096 00:28:14.030 [2024-07-26 01:10:44.371855] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.030 [2024-07-26 01:10:44.371872] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:14.030 [2024-07-26 01:10:44.371881] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:14.030 [2024-07-26 01:10:44.412166] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.030 [2024-07-26 01:10:44.412195] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.030 [2024-07-26 01:10:44.412203] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.030 [2024-07-26 01:10:44.412210] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ac240) on tqpair=0x1355ae0 00:28:14.030 [2024-07-26 01:10:44.412221] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:14.030 [2024-07-26 01:10:44.412229] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:14.030 [2024-07-26 01:10:44.412237] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 
00:28:14.030 [2024-07-26 01:10:44.412244] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:14.030 [2024-07-26 01:10:44.412252] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:14.030 [2024-07-26 01:10:44.412260] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:14.030 [2024-07-26 01:10:44.412275] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:14.030 [2024-07-26 01:10:44.412292] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.030 [2024-07-26 01:10:44.412301] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.030 [2024-07-26 01:10:44.412308] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1355ae0) 00:28:14.030 [2024-07-26 01:10:44.412320] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:14.030 [2024-07-26 01:10:44.412344] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ac240, cid 0, qid 0 00:28:14.030 [2024-07-26 01:10:44.412465] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.030 [2024-07-26 01:10:44.412480] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.030 [2024-07-26 01:10:44.412487] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.030 [2024-07-26 01:10:44.412498] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ac240) on tqpair=0x1355ae0 00:28:14.030 [2024-07-26 01:10:44.412510] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.030 [2024-07-26 01:10:44.412517] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:28:14.030 [2024-07-26 01:10:44.412524] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1355ae0) 00:28:14.030 [2024-07-26 01:10:44.412534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:14.030 [2024-07-26 01:10:44.412544] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.030 [2024-07-26 01:10:44.412551] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.030 [2024-07-26 01:10:44.412558] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1355ae0) 00:28:14.030 [2024-07-26 01:10:44.412567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:14.030 [2024-07-26 01:10:44.412576] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.030 [2024-07-26 01:10:44.412583] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.030 [2024-07-26 01:10:44.412590] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1355ae0) 00:28:14.030 [2024-07-26 01:10:44.412599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:14.030 [2024-07-26 01:10:44.412608] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.030 [2024-07-26 01:10:44.412630] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.030 [2024-07-26 01:10:44.412636] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1355ae0) 00:28:14.030 [2024-07-26 01:10:44.412645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:14.030 [2024-07-26 01:10:44.412653] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:14.030 [2024-07-26 01:10:44.412672] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:14.030 [2024-07-26 01:10:44.412685] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.030 [2024-07-26 01:10:44.412693] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1355ae0) 00:28:14.030 [2024-07-26 01:10:44.412703] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.030 [2024-07-26 01:10:44.412725] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ac240, cid 0, qid 0 00:28:14.030 [2024-07-26 01:10:44.412752] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ac3c0, cid 1, qid 0 00:28:14.030 [2024-07-26 01:10:44.412760] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ac540, cid 2, qid 0 00:28:14.030 [2024-07-26 01:10:44.412767] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ac6c0, cid 3, qid 0 00:28:14.030 [2024-07-26 01:10:44.412775] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ac840, cid 4, qid 0 00:28:14.030 [2024-07-26 01:10:44.412910] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.030 [2024-07-26 01:10:44.412923] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.030 [2024-07-26 01:10:44.412930] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.030 [2024-07-26 01:10:44.412937] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ac840) on tqpair=0x1355ae0 00:28:14.030 [2024-07-26 01:10:44.412945] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive 
every 5000000 us 00:28:14.030 [2024-07-26 01:10:44.412954] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:14.030 [2024-07-26 01:10:44.412975] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:14.030 [2024-07-26 01:10:44.412988] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:14.030 [2024-07-26 01:10:44.412999] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.030 [2024-07-26 01:10:44.413007] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.030 [2024-07-26 01:10:44.413014] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1355ae0) 00:28:14.030 [2024-07-26 01:10:44.413025] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:14.030 [2024-07-26 01:10:44.417068] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ac840, cid 4, qid 0 00:28:14.030 [2024-07-26 01:10:44.417089] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.030 [2024-07-26 01:10:44.417100] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.030 [2024-07-26 01:10:44.417107] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.030 [2024-07-26 01:10:44.417114] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ac840) on tqpair=0x1355ae0 00:28:14.030 [2024-07-26 01:10:44.417180] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:14.030 [2024-07-26 01:10:44.417216] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to wait for identify active ns (timeout 30000 ms) 00:28:14.030 [2024-07-26 01:10:44.417232] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.030 [2024-07-26 01:10:44.417240] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1355ae0) 00:28:14.030 [2024-07-26 01:10:44.417251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.030 [2024-07-26 01:10:44.417274] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ac840, cid 4, qid 0 00:28:14.030 [2024-07-26 01:10:44.417406] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:14.031 [2024-07-26 01:10:44.417421] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:14.031 [2024-07-26 01:10:44.417429] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:14.031 [2024-07-26 01:10:44.417436] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1355ae0): datao=0, datal=4096, cccid=4 00:28:14.031 [2024-07-26 01:10:44.417443] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13ac840) on tqpair(0x1355ae0): expected_datao=0, payload_size=4096 00:28:14.031 [2024-07-26 01:10:44.417451] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.031 [2024-07-26 01:10:44.417462] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:14.031 [2024-07-26 01:10:44.417470] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:14.031 [2024-07-26 01:10:44.417482] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.031 [2024-07-26 01:10:44.417493] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.031 [2024-07-26 01:10:44.417500] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.031 [2024-07-26 01:10:44.417507] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ac840) on tqpair=0x1355ae0 00:28:14.031 [2024-07-26 01:10:44.417527] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:14.031 [2024-07-26 01:10:44.417543] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:14.031 [2024-07-26 01:10:44.417560] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:14.031 [2024-07-26 01:10:44.417574] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.031 [2024-07-26 01:10:44.417581] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1355ae0) 00:28:14.031 [2024-07-26 01:10:44.417596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.031 [2024-07-26 01:10:44.417619] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ac840, cid 4, qid 0 00:28:14.031 [2024-07-26 01:10:44.417767] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:14.031 [2024-07-26 01:10:44.417782] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:14.031 [2024-07-26 01:10:44.417789] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:14.031 [2024-07-26 01:10:44.417796] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1355ae0): datao=0, datal=4096, cccid=4 00:28:14.031 [2024-07-26 01:10:44.417804] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13ac840) on tqpair(0x1355ae0): expected_datao=0, payload_size=4096 00:28:14.031 [2024-07-26 01:10:44.417811] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.031 [2024-07-26 01:10:44.417822] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:14.031 [2024-07-26 01:10:44.417830] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:14.031 [2024-07-26 01:10:44.417842] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.031 [2024-07-26 01:10:44.417852] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.031 [2024-07-26 01:10:44.417859] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.031 [2024-07-26 01:10:44.417866] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ac840) on tqpair=0x1355ae0 00:28:14.031 [2024-07-26 01:10:44.417888] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:14.031 [2024-07-26 01:10:44.417907] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:14.031 [2024-07-26 01:10:44.417921] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.031 [2024-07-26 01:10:44.417929] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1355ae0) 00:28:14.031 [2024-07-26 01:10:44.417939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.031 [2024-07-26 01:10:44.417961] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ac840, cid 4, qid 0 00:28:14.031 [2024-07-26 01:10:44.418090] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:14.031 [2024-07-26 01:10:44.418105] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:14.031 [2024-07-26 01:10:44.418113] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:14.031 [2024-07-26 01:10:44.418119] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1355ae0): datao=0, datal=4096, cccid=4 00:28:14.031 [2024-07-26 01:10:44.418127] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13ac840) on tqpair(0x1355ae0): expected_datao=0, payload_size=4096 00:28:14.031 [2024-07-26 01:10:44.418135] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.031 [2024-07-26 01:10:44.418145] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:14.031 [2024-07-26 01:10:44.418152] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:14.031 [2024-07-26 01:10:44.418164] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.031 [2024-07-26 01:10:44.418175] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.031 [2024-07-26 01:10:44.418182] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.031 [2024-07-26 01:10:44.418188] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ac840) on tqpair=0x1355ae0 00:28:14.031 [2024-07-26 01:10:44.418204] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:14.031 [2024-07-26 01:10:44.418220] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:14.031 [2024-07-26 01:10:44.418238] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:14.031 [2024-07-26 01:10:44.418251] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:28:14.031 [2024-07-26 01:10:44.418260] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:14.031 
[2024-07-26 01:10:44.418269] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:14.031 [2024-07-26 01:10:44.418277] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:14.031 [2024-07-26 01:10:44.418285] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:14.031 [2024-07-26 01:10:44.418294] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:14.031 [2024-07-26 01:10:44.418314] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.031 [2024-07-26 01:10:44.418323] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1355ae0) 00:28:14.031 [2024-07-26 01:10:44.418334] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.031 [2024-07-26 01:10:44.418346] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.031 [2024-07-26 01:10:44.418353] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.031 [2024-07-26 01:10:44.418360] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1355ae0) 00:28:14.031 [2024-07-26 01:10:44.418369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:14.031 [2024-07-26 01:10:44.418416] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ac840, cid 4, qid 0 00:28:14.031 [2024-07-26 01:10:44.418429] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ac9c0, cid 5, qid 0 00:28:14.031 [2024-07-26 01:10:44.418561] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.031 
[2024-07-26 01:10:44.418574] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.031 [2024-07-26 01:10:44.418581] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.031 [2024-07-26 01:10:44.418589] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ac840) on tqpair=0x1355ae0 00:28:14.031 [2024-07-26 01:10:44.418600] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.031 [2024-07-26 01:10:44.418610] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.031 [2024-07-26 01:10:44.418617] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.031 [2024-07-26 01:10:44.418624] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ac9c0) on tqpair=0x1355ae0 00:28:14.031 [2024-07-26 01:10:44.418639] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.031 [2024-07-26 01:10:44.418649] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1355ae0) 00:28:14.031 [2024-07-26 01:10:44.418659] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.031 [2024-07-26 01:10:44.418680] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ac9c0, cid 5, qid 0 00:28:14.031 [2024-07-26 01:10:44.418793] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.031 [2024-07-26 01:10:44.418806] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.031 [2024-07-26 01:10:44.418813] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.031 [2024-07-26 01:10:44.418820] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ac9c0) on tqpair=0x1355ae0 00:28:14.031 [2024-07-26 01:10:44.418835] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.031 [2024-07-26 01:10:44.418848] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1355ae0) 00:28:14.031 [2024-07-26 01:10:44.418859] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.031 [2024-07-26 01:10:44.418880] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ac9c0, cid 5, qid 0 00:28:14.031 [2024-07-26 01:10:44.418990] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.031 [2024-07-26 01:10:44.419005] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.031 [2024-07-26 01:10:44.419012] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.031 [2024-07-26 01:10:44.419019] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ac9c0) on tqpair=0x1355ae0 00:28:14.031 [2024-07-26 01:10:44.419036] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.031 [2024-07-26 01:10:44.419045] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1355ae0) 00:28:14.031 [2024-07-26 01:10:44.419056] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.031 [2024-07-26 01:10:44.419086] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ac9c0, cid 5, qid 0 00:28:14.031 [2024-07-26 01:10:44.419220] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.031 [2024-07-26 01:10:44.419233] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.031 [2024-07-26 01:10:44.419240] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.031 [2024-07-26 01:10:44.419247] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ac9c0) on tqpair=0x1355ae0 00:28:14.032 [2024-07-26 01:10:44.419271] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:28:14.032 [2024-07-26 01:10:44.419282] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1355ae0) 00:28:14.032 [2024-07-26 01:10:44.419293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.032 [2024-07-26 01:10:44.419306] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.032 [2024-07-26 01:10:44.419314] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1355ae0) 00:28:14.032 [2024-07-26 01:10:44.419323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.032 [2024-07-26 01:10:44.419335] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.032 [2024-07-26 01:10:44.419342] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1355ae0) 00:28:14.032 [2024-07-26 01:10:44.419351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.032 [2024-07-26 01:10:44.419377] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.032 [2024-07-26 01:10:44.419385] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1355ae0) 00:28:14.032 [2024-07-26 01:10:44.419394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.032 [2024-07-26 01:10:44.419416] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ac9c0, cid 5, qid 0 00:28:14.032 [2024-07-26 01:10:44.419441] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x13ac840, cid 4, qid 0 00:28:14.032 [2024-07-26 01:10:44.419450] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13acb40, cid 6, qid 0 00:28:14.032 [2024-07-26 01:10:44.419458] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13accc0, cid 7, qid 0 00:28:14.032 [2024-07-26 01:10:44.419646] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:14.032 [2024-07-26 01:10:44.419663] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:14.032 [2024-07-26 01:10:44.419671] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:14.032 [2024-07-26 01:10:44.419678] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1355ae0): datao=0, datal=8192, cccid=5 00:28:14.032 [2024-07-26 01:10:44.419686] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13ac9c0) on tqpair(0x1355ae0): expected_datao=0, payload_size=8192 00:28:14.032 [2024-07-26 01:10:44.419693] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.032 [2024-07-26 01:10:44.419711] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:14.032 [2024-07-26 01:10:44.419721] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:14.032 [2024-07-26 01:10:44.419733] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:14.032 [2024-07-26 01:10:44.419744] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:14.032 [2024-07-26 01:10:44.419751] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:14.032 [2024-07-26 01:10:44.419758] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1355ae0): datao=0, datal=512, cccid=4 00:28:14.032 [2024-07-26 01:10:44.419766] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13ac840) on tqpair(0x1355ae0): expected_datao=0, payload_size=512 00:28:14.032 [2024-07-26 01:10:44.419773] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:28:14.032 [2024-07-26 01:10:44.419783] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:14.032 [2024-07-26 01:10:44.419790] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:14.032 [2024-07-26 01:10:44.419799] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:14.032 [2024-07-26 01:10:44.419808] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:14.032 [2024-07-26 01:10:44.419815] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:14.032 [2024-07-26 01:10:44.419821] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1355ae0): datao=0, datal=512, cccid=6 00:28:14.032 [2024-07-26 01:10:44.419829] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13acb40) on tqpair(0x1355ae0): expected_datao=0, payload_size=512 00:28:14.032 [2024-07-26 01:10:44.419837] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.032 [2024-07-26 01:10:44.419846] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:14.032 [2024-07-26 01:10:44.419853] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:14.032 [2024-07-26 01:10:44.419862] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:14.032 [2024-07-26 01:10:44.419871] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:14.032 [2024-07-26 01:10:44.419878] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:14.032 [2024-07-26 01:10:44.419885] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1355ae0): datao=0, datal=4096, cccid=7 00:28:14.032 [2024-07-26 01:10:44.419893] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13accc0) on tqpair(0x1355ae0): expected_datao=0, payload_size=4096 00:28:14.032 [2024-07-26 01:10:44.419900] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.032 [2024-07-26 01:10:44.419910] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:14.032 [2024-07-26 01:10:44.419917] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:14.032 [2024-07-26 01:10:44.419929] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.032 [2024-07-26 01:10:44.419939] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.032 [2024-07-26 01:10:44.419946] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.032 [2024-07-26 01:10:44.419953] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ac9c0) on tqpair=0x1355ae0 00:28:14.032 [2024-07-26 01:10:44.419973] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.032 [2024-07-26 01:10:44.420000] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.032 [2024-07-26 01:10:44.420007] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.032 [2024-07-26 01:10:44.420014] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ac840) on tqpair=0x1355ae0 00:28:14.032 [2024-07-26 01:10:44.420033] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.032 [2024-07-26 01:10:44.420066] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.032 [2024-07-26 01:10:44.420075] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.032 [2024-07-26 01:10:44.420082] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13acb40) on tqpair=0x1355ae0 00:28:14.032 [2024-07-26 01:10:44.420094] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.032 [2024-07-26 01:10:44.420103] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.032 [2024-07-26 01:10:44.420126] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.032 [2024-07-26 01:10:44.420133] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13accc0) on 
tqpair=0x1355ae0 00:28:14.032 ===================================================== 00:28:14.032 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:14.032 ===================================================== 00:28:14.032 Controller Capabilities/Features 00:28:14.032 ================================ 00:28:14.032 Vendor ID: 8086 00:28:14.032 Subsystem Vendor ID: 8086 00:28:14.032 Serial Number: SPDK00000000000001 00:28:14.032 Model Number: SPDK bdev Controller 00:28:14.032 Firmware Version: 24.09 00:28:14.032 Recommended Arb Burst: 6 00:28:14.032 IEEE OUI Identifier: e4 d2 5c 00:28:14.032 Multi-path I/O 00:28:14.032 May have multiple subsystem ports: Yes 00:28:14.032 May have multiple controllers: Yes 00:28:14.032 Associated with SR-IOV VF: No 00:28:14.032 Max Data Transfer Size: 131072 00:28:14.032 Max Number of Namespaces: 32 00:28:14.032 Max Number of I/O Queues: 127 00:28:14.032 NVMe Specification Version (VS): 1.3 00:28:14.032 NVMe Specification Version (Identify): 1.3 00:28:14.032 Maximum Queue Entries: 128 00:28:14.032 Contiguous Queues Required: Yes 00:28:14.032 Arbitration Mechanisms Supported 00:28:14.032 Weighted Round Robin: Not Supported 00:28:14.032 Vendor Specific: Not Supported 00:28:14.032 Reset Timeout: 15000 ms 00:28:14.032 Doorbell Stride: 4 bytes 00:28:14.032 NVM Subsystem Reset: Not Supported 00:28:14.032 Command Sets Supported 00:28:14.032 NVM Command Set: Supported 00:28:14.032 Boot Partition: Not Supported 00:28:14.032 Memory Page Size Minimum: 4096 bytes 00:28:14.032 Memory Page Size Maximum: 4096 bytes 00:28:14.032 Persistent Memory Region: Not Supported 00:28:14.032 Optional Asynchronous Events Supported 00:28:14.032 Namespace Attribute Notices: Supported 00:28:14.032 Firmware Activation Notices: Not Supported 00:28:14.032 ANA Change Notices: Not Supported 00:28:14.032 PLE Aggregate Log Change Notices: Not Supported 00:28:14.032 LBA Status Info Alert Notices: Not Supported 00:28:14.032 EGE Aggregate Log Change 
Notices: Not Supported 00:28:14.032 Normal NVM Subsystem Shutdown event: Not Supported 00:28:14.032 Zone Descriptor Change Notices: Not Supported 00:28:14.032 Discovery Log Change Notices: Not Supported 00:28:14.032 Controller Attributes 00:28:14.032 128-bit Host Identifier: Supported 00:28:14.032 Non-Operational Permissive Mode: Not Supported 00:28:14.032 NVM Sets: Not Supported 00:28:14.032 Read Recovery Levels: Not Supported 00:28:14.032 Endurance Groups: Not Supported 00:28:14.032 Predictable Latency Mode: Not Supported 00:28:14.032 Traffic Based Keep ALive: Not Supported 00:28:14.032 Namespace Granularity: Not Supported 00:28:14.032 SQ Associations: Not Supported 00:28:14.032 UUID List: Not Supported 00:28:14.032 Multi-Domain Subsystem: Not Supported 00:28:14.032 Fixed Capacity Management: Not Supported 00:28:14.032 Variable Capacity Management: Not Supported 00:28:14.032 Delete Endurance Group: Not Supported 00:28:14.032 Delete NVM Set: Not Supported 00:28:14.032 Extended LBA Formats Supported: Not Supported 00:28:14.032 Flexible Data Placement Supported: Not Supported 00:28:14.032 00:28:14.032 Controller Memory Buffer Support 00:28:14.032 ================================ 00:28:14.032 Supported: No 00:28:14.032 00:28:14.032 Persistent Memory Region Support 00:28:14.032 ================================ 00:28:14.032 Supported: No 00:28:14.032 00:28:14.033 Admin Command Set Attributes 00:28:14.033 ============================ 00:28:14.033 Security Send/Receive: Not Supported 00:28:14.033 Format NVM: Not Supported 00:28:14.033 Firmware Activate/Download: Not Supported 00:28:14.033 Namespace Management: Not Supported 00:28:14.033 Device Self-Test: Not Supported 00:28:14.033 Directives: Not Supported 00:28:14.033 NVMe-MI: Not Supported 00:28:14.033 Virtualization Management: Not Supported 00:28:14.033 Doorbell Buffer Config: Not Supported 00:28:14.033 Get LBA Status Capability: Not Supported 00:28:14.033 Command & Feature Lockdown Capability: Not Supported 
00:28:14.033 Abort Command Limit: 4 00:28:14.033 Async Event Request Limit: 4 00:28:14.033 Number of Firmware Slots: N/A 00:28:14.033 Firmware Slot 1 Read-Only: N/A 00:28:14.033 Firmware Activation Without Reset: N/A 00:28:14.033 Multiple Update Detection Support: N/A 00:28:14.033 Firmware Update Granularity: No Information Provided 00:28:14.033 Per-Namespace SMART Log: No 00:28:14.033 Asymmetric Namespace Access Log Page: Not Supported 00:28:14.033 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:14.033 Command Effects Log Page: Supported 00:28:14.033 Get Log Page Extended Data: Supported 00:28:14.033 Telemetry Log Pages: Not Supported 00:28:14.033 Persistent Event Log Pages: Not Supported 00:28:14.033 Supported Log Pages Log Page: May Support 00:28:14.033 Commands Supported & Effects Log Page: Not Supported 00:28:14.033 Feature Identifiers & Effects Log Page:May Support 00:28:14.033 NVMe-MI Commands & Effects Log Page: May Support 00:28:14.033 Data Area 4 for Telemetry Log: Not Supported 00:28:14.033 Error Log Page Entries Supported: 128 00:28:14.033 Keep Alive: Supported 00:28:14.033 Keep Alive Granularity: 10000 ms 00:28:14.033 00:28:14.033 NVM Command Set Attributes 00:28:14.033 ========================== 00:28:14.033 Submission Queue Entry Size 00:28:14.033 Max: 64 00:28:14.033 Min: 64 00:28:14.033 Completion Queue Entry Size 00:28:14.033 Max: 16 00:28:14.033 Min: 16 00:28:14.033 Number of Namespaces: 32 00:28:14.033 Compare Command: Supported 00:28:14.033 Write Uncorrectable Command: Not Supported 00:28:14.033 Dataset Management Command: Supported 00:28:14.033 Write Zeroes Command: Supported 00:28:14.033 Set Features Save Field: Not Supported 00:28:14.033 Reservations: Supported 00:28:14.033 Timestamp: Not Supported 00:28:14.033 Copy: Supported 00:28:14.033 Volatile Write Cache: Present 00:28:14.033 Atomic Write Unit (Normal): 1 00:28:14.033 Atomic Write Unit (PFail): 1 00:28:14.033 Atomic Compare & Write Unit: 1 00:28:14.033 Fused Compare & Write: Supported 
00:28:14.033 Scatter-Gather List 00:28:14.033 SGL Command Set: Supported 00:28:14.033 SGL Keyed: Supported 00:28:14.033 SGL Bit Bucket Descriptor: Not Supported 00:28:14.033 SGL Metadata Pointer: Not Supported 00:28:14.033 Oversized SGL: Not Supported 00:28:14.033 SGL Metadata Address: Not Supported 00:28:14.033 SGL Offset: Supported 00:28:14.033 Transport SGL Data Block: Not Supported 00:28:14.033 Replay Protected Memory Block: Not Supported 00:28:14.033 00:28:14.033 Firmware Slot Information 00:28:14.033 ========================= 00:28:14.033 Active slot: 1 00:28:14.033 Slot 1 Firmware Revision: 24.09 00:28:14.033 00:28:14.033 00:28:14.033 Commands Supported and Effects 00:28:14.033 ============================== 00:28:14.033 Admin Commands 00:28:14.033 -------------- 00:28:14.033 Get Log Page (02h): Supported 00:28:14.033 Identify (06h): Supported 00:28:14.033 Abort (08h): Supported 00:28:14.033 Set Features (09h): Supported 00:28:14.033 Get Features (0Ah): Supported 00:28:14.033 Asynchronous Event Request (0Ch): Supported 00:28:14.033 Keep Alive (18h): Supported 00:28:14.033 I/O Commands 00:28:14.033 ------------ 00:28:14.033 Flush (00h): Supported LBA-Change 00:28:14.033 Write (01h): Supported LBA-Change 00:28:14.033 Read (02h): Supported 00:28:14.033 Compare (05h): Supported 00:28:14.033 Write Zeroes (08h): Supported LBA-Change 00:28:14.033 Dataset Management (09h): Supported LBA-Change 00:28:14.033 Copy (19h): Supported LBA-Change 00:28:14.033 00:28:14.033 Error Log 00:28:14.033 ========= 00:28:14.033 00:28:14.033 Arbitration 00:28:14.033 =========== 00:28:14.033 Arbitration Burst: 1 00:28:14.033 00:28:14.033 Power Management 00:28:14.033 ================ 00:28:14.033 Number of Power States: 1 00:28:14.033 Current Power State: Power State #0 00:28:14.033 Power State #0: 00:28:14.033 Max Power: 0.00 W 00:28:14.033 Non-Operational State: Operational 00:28:14.033 Entry Latency: Not Reported 00:28:14.033 Exit Latency: Not Reported 00:28:14.033 Relative Read 
Throughput: 0 00:28:14.033 Relative Read Latency: 0 00:28:14.033 Relative Write Throughput: 0 00:28:14.033 Relative Write Latency: 0 00:28:14.033 Idle Power: Not Reported 00:28:14.033 Active Power: Not Reported 00:28:14.033 Non-Operational Permissive Mode: Not Supported 00:28:14.033 00:28:14.033 Health Information 00:28:14.033 ================== 00:28:14.033 Critical Warnings: 00:28:14.033 Available Spare Space: OK 00:28:14.033 Temperature: OK 00:28:14.033 Device Reliability: OK 00:28:14.033 Read Only: No 00:28:14.033 Volatile Memory Backup: OK 00:28:14.033 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:14.033 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:28:14.033 Available Spare: 0% 00:28:14.033 Available Spare Threshold: 0% 00:28:14.033 Life Percentage Used:[2024-07-26 01:10:44.420270] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.033 [2024-07-26 01:10:44.420282] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1355ae0) 00:28:14.033 [2024-07-26 01:10:44.420294] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.033 [2024-07-26 01:10:44.420317] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13accc0, cid 7, qid 0 00:28:14.033 [2024-07-26 01:10:44.420439] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.033 [2024-07-26 01:10:44.420452] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.033 [2024-07-26 01:10:44.420459] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.033 [2024-07-26 01:10:44.420466] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13accc0) on tqpair=0x1355ae0 00:28:14.033 [2024-07-26 01:10:44.420512] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:14.033 [2024-07-26 01:10:44.420531] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ac240) on tqpair=0x1355ae0 00:28:14.033 [2024-07-26 01:10:44.420543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.033 [2024-07-26 01:10:44.420552] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ac3c0) on tqpair=0x1355ae0 00:28:14.033 [2024-07-26 01:10:44.420560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.033 [2024-07-26 01:10:44.420568] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ac540) on tqpair=0x1355ae0 00:28:14.033 [2024-07-26 01:10:44.420576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.033 [2024-07-26 01:10:44.420584] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ac6c0) on tqpair=0x1355ae0 00:28:14.033 [2024-07-26 01:10:44.420592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.033 [2024-07-26 01:10:44.420605] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.033 [2024-07-26 01:10:44.420613] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.034 [2024-07-26 01:10:44.420635] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1355ae0) 00:28:14.034 [2024-07-26 01:10:44.420646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.034 [2024-07-26 01:10:44.420668] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ac6c0, cid 3, qid 0 00:28:14.034 [2024-07-26 01:10:44.420795] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.034 [2024-07-26 01:10:44.420808] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.034 [2024-07-26 01:10:44.420816] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.034 [2024-07-26 01:10:44.420823] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ac6c0) on tqpair=0x1355ae0 00:28:14.034 [2024-07-26 01:10:44.420834] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.034 [2024-07-26 01:10:44.420846] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.034 [2024-07-26 01:10:44.420853] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1355ae0) 00:28:14.034 [2024-07-26 01:10:44.420864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.034 [2024-07-26 01:10:44.420890] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ac6c0, cid 3, qid 0 00:28:14.034 [2024-07-26 01:10:44.424071] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.034 [2024-07-26 01:10:44.424089] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.034 [2024-07-26 01:10:44.424096] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.034 [2024-07-26 01:10:44.424103] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ac6c0) on tqpair=0x1355ae0 00:28:14.034 [2024-07-26 01:10:44.424111] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:14.034 [2024-07-26 01:10:44.424119] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:14.034 [2024-07-26 01:10:44.424150] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:14.034 [2024-07-26 01:10:44.424160] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:14.034 [2024-07-26 01:10:44.424167] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1355ae0) 00:28:14.034 [2024-07-26 01:10:44.424178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.034 [2024-07-26 01:10:44.424201] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ac6c0, cid 3, qid 0 00:28:14.034 [2024-07-26 01:10:44.424319] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:14.034 [2024-07-26 01:10:44.424334] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:14.034 [2024-07-26 01:10:44.424342] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:14.034 [2024-07-26 01:10:44.424349] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ac6c0) on tqpair=0x1355ae0 00:28:14.034 [2024-07-26 01:10:44.424362] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:28:14.034 0% 00:28:14.034 Data Units Read: 0 00:28:14.034 Data Units Written: 0 00:28:14.034 Host Read Commands: 0 00:28:14.034 Host Write Commands: 0 00:28:14.034 Controller Busy Time: 0 minutes 00:28:14.034 Power Cycles: 0 00:28:14.034 Power On Hours: 0 hours 00:28:14.034 Unsafe Shutdowns: 0 00:28:14.034 Unrecoverable Media Errors: 0 00:28:14.034 Lifetime Error Log Entries: 0 00:28:14.034 Warning Temperature Time: 0 minutes 00:28:14.034 Critical Temperature Time: 0 minutes 00:28:14.034 00:28:14.034 Number of Queues 00:28:14.034 ================ 00:28:14.034 Number of I/O Submission Queues: 127 00:28:14.034 Number of I/O Completion Queues: 127 00:28:14.034 00:28:14.034 Active Namespaces 00:28:14.034 ================= 00:28:14.034 Namespace ID:1 00:28:14.034 Error Recovery Timeout: Unlimited 00:28:14.034 Command Set Identifier: NVM (00h) 00:28:14.034 Deallocate: Supported 00:28:14.034 Deallocated/Unwritten Error: Not Supported 00:28:14.034 Deallocated Read Value: 
Unknown 00:28:14.034 Deallocate in Write Zeroes: Not Supported 00:28:14.034 Deallocated Guard Field: 0xFFFF 00:28:14.034 Flush: Supported 00:28:14.034 Reservation: Supported 00:28:14.034 Namespace Sharing Capabilities: Multiple Controllers 00:28:14.034 Size (in LBAs): 131072 (0GiB) 00:28:14.034 Capacity (in LBAs): 131072 (0GiB) 00:28:14.034 Utilization (in LBAs): 131072 (0GiB) 00:28:14.034 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:14.034 EUI64: ABCDEF0123456789 00:28:14.034 UUID: 0ace8ee5-952b-49dc-9854-d527c3b4e758 00:28:14.034 Thin Provisioning: Not Supported 00:28:14.034 Per-NS Atomic Units: Yes 00:28:14.034 Atomic Boundary Size (Normal): 0 00:28:14.034 Atomic Boundary Size (PFail): 0 00:28:14.034 Atomic Boundary Offset: 0 00:28:14.034 Maximum Single Source Range Length: 65535 00:28:14.034 Maximum Copy Length: 65535 00:28:14.034 Maximum Source Range Count: 1 00:28:14.034 NGUID/EUI64 Never Reused: No 00:28:14.034 Namespace Write Protected: No 00:28:14.034 Number of LBA Formats: 1 00:28:14.034 Current LBA Format: LBA Format #00 00:28:14.034 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:14.034 00:28:14.034 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:14.034 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:14.034 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.034 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:14.291 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.292 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:14.292 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:14.292 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:14.292 01:10:44 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:28:14.292 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:14.292 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:28:14.292 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:14.292 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:14.292 rmmod nvme_tcp 00:28:14.292 rmmod nvme_fabrics 00:28:14.292 rmmod nvme_keyring 00:28:14.292 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:14.292 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:28:14.292 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:28:14.292 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1921428 ']' 00:28:14.292 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1921428 00:28:14.292 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 1921428 ']' 00:28:14.292 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 1921428 00:28:14.292 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:28:14.292 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:14.292 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1921428 00:28:14.292 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:14.292 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:14.292 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1921428' 00:28:14.292 killing 
process with pid 1921428 00:28:14.292 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 1921428 00:28:14.292 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 1921428 00:28:14.550 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:14.550 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:14.550 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:14.550 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:14.550 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:14.550 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.550 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:14.550 01:10:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.450 01:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:16.450 00:28:16.450 real 0m5.244s 00:28:16.450 user 0m4.386s 00:28:16.450 sys 0m1.734s 00:28:16.450 01:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:16.450 01:10:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:16.450 ************************************ 00:28:16.450 END TEST nvmf_identify 00:28:16.450 ************************************ 00:28:16.450 01:10:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:16.450 01:10:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:16.450 01:10:46 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:28:16.450 01:10:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.450 ************************************ 00:28:16.450 START TEST nvmf_perf 00:28:16.450 ************************************ 00:28:16.707 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:16.707 * Looking for test storage... 00:28:16.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:16.707 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:16.707 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:16.707 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:16.707 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:16.707 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:16.707 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:16.707 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:16.707 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:16.708 01:10:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 
-- # local -ga net_devs 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:18.610 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:18.610 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:18.610 01:10:48 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:18.610 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:18.610 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr 
flush cvl_0_0 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:18.610 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:18.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:18.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:28:18.610 00:28:18.610 --- 10.0.0.2 ping statistics --- 00:28:18.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.610 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:28:18.611 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:18.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:18.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:28:18.611 00:28:18.611 --- 10.0.0.1 ping statistics --- 00:28:18.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.611 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:28:18.611 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:18.611 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:28:18.611 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:18.611 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:18.611 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:18.611 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:18.611 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:18.611 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:18.611 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:18.611 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:18.611 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:18.611 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:18.611 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:18.611 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1923502 00:28:18.611 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:18.611 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1923502 00:28:18.611 
01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 1923502 ']' 00:28:18.611 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.611 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:18.611 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:18.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:18.611 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:18.611 01:10:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:18.611 [2024-07-26 01:10:49.026770] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:28:18.611 [2024-07-26 01:10:49.026844] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:18.869 EAL: No free 2048 kB hugepages reported on node 1 00:28:18.869 [2024-07-26 01:10:49.093139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:18.869 [2024-07-26 01:10:49.179283] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:18.869 [2024-07-26 01:10:49.179344] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:18.869 [2024-07-26 01:10:49.179373] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:18.869 [2024-07-26 01:10:49.179385] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:18.869 [2024-07-26 01:10:49.179395] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:18.870 [2024-07-26 01:10:49.179438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:18.870 [2024-07-26 01:10:49.179487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:18.870 [2024-07-26 01:10:49.179544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:18.870 [2024-07-26 01:10:49.179547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.870 01:10:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:18.870 01:10:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:28:18.870 01:10:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:18.870 01:10:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:18.870 01:10:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:19.127 01:10:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:19.127 01:10:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:19.127 01:10:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:22.452 01:10:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:22.452 01:10:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:22.452 01:10:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:28:22.452 01:10:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:22.710 01:10:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:22.710 01:10:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:28:22.710 01:10:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:22.710 01:10:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:22.710 01:10:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:22.710 [2024-07-26 01:10:53.121910] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:22.968 01:10:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:23.225 01:10:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:23.225 01:10:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:23.225 01:10:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:23.225 01:10:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:23.482 01:10:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:23.739 [2024-07-26 01:10:54.121550] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:23.739 01:10:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:23.995 01:10:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:28:23.995 01:10:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:23.995 01:10:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:23.995 01:10:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:25.365 Initializing NVMe Controllers 00:28:25.365 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:28:25.365 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:28:25.365 Initialization complete. Launching workers. 00:28:25.365 ======================================================== 00:28:25.365 Latency(us) 00:28:25.365 Device Information : IOPS MiB/s Average min max 00:28:25.365 PCIE (0000:88:00.0) NSID 1 from core 0: 85157.89 332.65 375.30 11.16 6259.14 00:28:25.365 ======================================================== 00:28:25.365 Total : 85157.89 332.65 375.30 11.16 6259.14 00:28:25.365 00:28:25.365 01:10:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:25.365 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.735 Initializing NVMe Controllers 00:28:26.735 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:26.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:26.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:26.735 
Initialization complete. Launching workers. 00:28:26.735 ======================================================== 00:28:26.735 Latency(us) 00:28:26.735 Device Information : IOPS MiB/s Average min max 00:28:26.735 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 151.00 0.59 6877.74 186.51 46068.32 00:28:26.735 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 49.00 0.19 20988.20 6950.08 47895.00 00:28:26.735 ======================================================== 00:28:26.735 Total : 200.00 0.78 10334.80 186.51 47895.00 00:28:26.735 00:28:26.735 01:10:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:26.735 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.667 Initializing NVMe Controllers 00:28:27.667 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:27.667 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:27.667 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:27.667 Initialization complete. Launching workers. 
00:28:27.667 ======================================================== 00:28:27.667 Latency(us) 00:28:27.667 Device Information : IOPS MiB/s Average min max 00:28:27.667 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8359.99 32.66 3829.81 590.61 7743.67 00:28:27.667 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3904.00 15.25 8240.10 6756.22 16127.41 00:28:27.667 ======================================================== 00:28:27.667 Total : 12263.99 47.91 5233.74 590.61 16127.41 00:28:27.667 00:28:27.924 01:10:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:27.924 01:10:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:27.924 01:10:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:27.924 EAL: No free 2048 kB hugepages reported on node 1 00:28:30.458 Initializing NVMe Controllers 00:28:30.458 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:30.458 Controller IO queue size 128, less than required. 00:28:30.458 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:30.458 Controller IO queue size 128, less than required. 00:28:30.458 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:30.458 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:30.458 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:30.458 Initialization complete. Launching workers. 
00:28:30.458 ======================================================== 00:28:30.458 Latency(us) 00:28:30.458 Device Information : IOPS MiB/s Average min max 00:28:30.458 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1500.49 375.12 87160.53 54492.18 122493.50 00:28:30.458 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 610.50 152.62 217810.95 79056.98 335123.00 00:28:30.458 ======================================================== 00:28:30.458 Total : 2110.99 527.75 124944.55 54492.18 335123.00 00:28:30.458 00:28:30.458 01:11:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:30.458 EAL: No free 2048 kB hugepages reported on node 1 00:28:30.458 No valid NVMe controllers or AIO or URING devices found 00:28:30.458 Initializing NVMe Controllers 00:28:30.458 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:30.458 Controller IO queue size 128, less than required. 00:28:30.458 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:30.458 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:30.458 Controller IO queue size 128, less than required. 00:28:30.458 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:30.458 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:28:30.458 WARNING: Some requested NVMe devices were skipped 00:28:30.458 01:11:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:30.458 EAL: No free 2048 kB hugepages reported on node 1 00:28:32.986 Initializing NVMe Controllers 00:28:32.986 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:32.986 Controller IO queue size 128, less than required. 00:28:32.986 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:32.986 Controller IO queue size 128, less than required. 00:28:32.986 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:32.986 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:32.986 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:32.986 Initialization complete. Launching workers. 
00:28:32.986 00:28:32.986 ==================== 00:28:32.986 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:32.986 TCP transport: 00:28:32.986 polls: 15044 00:28:32.986 idle_polls: 5815 00:28:32.986 sock_completions: 9229 00:28:32.986 nvme_completions: 6335 00:28:32.986 submitted_requests: 9608 00:28:32.986 queued_requests: 1 00:28:32.986 00:28:32.986 ==================== 00:28:32.987 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:32.987 TCP transport: 00:28:32.987 polls: 15650 00:28:32.987 idle_polls: 9590 00:28:32.987 sock_completions: 6060 00:28:32.987 nvme_completions: 3443 00:28:32.987 submitted_requests: 5174 00:28:32.987 queued_requests: 1 00:28:32.987 ======================================================== 00:28:32.987 Latency(us) 00:28:32.987 Device Information : IOPS MiB/s Average min max 00:28:32.987 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1583.25 395.81 82572.61 47466.67 136383.38 00:28:32.987 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 860.37 215.09 151526.03 65462.55 238611.68 00:28:32.987 ======================================================== 00:28:32.987 Total : 2443.62 610.91 106850.19 47466.67 238611.68 00:28:32.987 00:28:32.987 01:11:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:32.987 01:11:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:33.243 01:11:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:33.243 01:11:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:28:33.243 01:11:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:36.518 01:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@72 -- # ls_guid=4a853e88-69b1-4e33-a310-d3718b70f366 00:28:36.518 01:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 4a853e88-69b1-4e33-a310-d3718b70f366 00:28:36.518 01:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=4a853e88-69b1-4e33-a310-d3718b70f366 00:28:36.518 01:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:36.518 01:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:28:36.518 01:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:28:36.518 01:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:37.082 01:11:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:37.082 { 00:28:37.082 "uuid": "4a853e88-69b1-4e33-a310-d3718b70f366", 00:28:37.082 "name": "lvs_0", 00:28:37.082 "base_bdev": "Nvme0n1", 00:28:37.082 "total_data_clusters": 238234, 00:28:37.082 "free_clusters": 238234, 00:28:37.082 "block_size": 512, 00:28:37.082 "cluster_size": 4194304 00:28:37.082 } 00:28:37.082 ]' 00:28:37.082 01:11:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="4a853e88-69b1-4e33-a310-d3718b70f366") .free_clusters' 00:28:37.082 01:11:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:28:37.082 01:11:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="4a853e88-69b1-4e33-a310-d3718b70f366") .cluster_size' 00:28:37.082 01:11:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:37.082 01:11:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:28:37.082 01:11:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 
00:28:37.082 952936 00:28:37.082 01:11:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:37.082 01:11:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:28:37.082 01:11:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4a853e88-69b1-4e33-a310-d3718b70f366 lbd_0 20480 00:28:37.339 01:11:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=041a46a4-d591-431e-958a-6445970e3f94 00:28:37.339 01:11:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 041a46a4-d591-431e-958a-6445970e3f94 lvs_n_0 00:28:38.271 01:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=60d19b4d-e466-479b-910b-b9d4845925f2 00:28:38.271 01:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 60d19b4d-e466-479b-910b-b9d4845925f2 00:28:38.271 01:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=60d19b4d-e466-479b-910b-b9d4845925f2 00:28:38.271 01:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:38.271 01:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:28:38.271 01:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:28:38.271 01:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:38.528 01:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:38.528 { 00:28:38.528 "uuid": "4a853e88-69b1-4e33-a310-d3718b70f366", 00:28:38.528 "name": "lvs_0", 00:28:38.528 "base_bdev": "Nvme0n1", 00:28:38.528 "total_data_clusters": 238234, 00:28:38.528 "free_clusters": 233114, 00:28:38.528 "block_size": 512, 00:28:38.528 
"cluster_size": 4194304 00:28:38.528 }, 00:28:38.528 { 00:28:38.528 "uuid": "60d19b4d-e466-479b-910b-b9d4845925f2", 00:28:38.528 "name": "lvs_n_0", 00:28:38.529 "base_bdev": "041a46a4-d591-431e-958a-6445970e3f94", 00:28:38.529 "total_data_clusters": 5114, 00:28:38.529 "free_clusters": 5114, 00:28:38.529 "block_size": 512, 00:28:38.529 "cluster_size": 4194304 00:28:38.529 } 00:28:38.529 ]' 00:28:38.529 01:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="60d19b4d-e466-479b-910b-b9d4845925f2") .free_clusters' 00:28:38.529 01:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:28:38.529 01:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="60d19b4d-e466-479b-910b-b9d4845925f2") .cluster_size' 00:28:38.529 01:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:38.529 01:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:28:38.529 01:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:28:38.529 20456 00:28:38.529 01:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:38.529 01:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 60d19b4d-e466-479b-910b-b9d4845925f2 lbd_nest_0 20456 00:28:38.786 01:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=b0e217ee-2b8c-4838-9ad3-8e61fea4e751 00:28:38.786 01:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:39.043 01:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:39.043 01:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 b0e217ee-2b8c-4838-9ad3-8e61fea4e751 00:28:39.300 01:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:39.557 01:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:39.557 01:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:39.557 01:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:39.557 01:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:39.557 01:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:39.557 EAL: No free 2048 kB hugepages reported on node 1 00:28:51.753 Initializing NVMe Controllers 00:28:51.753 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:51.753 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:51.753 Initialization complete. Launching workers. 
00:28:51.753 ======================================================== 00:28:51.753 Latency(us) 00:28:51.753 Device Information : IOPS MiB/s Average min max 00:28:51.753 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.80 0.02 21370.08 188.26 45832.12 00:28:51.753 ======================================================== 00:28:51.753 Total : 46.80 0.02 21370.08 188.26 45832.12 00:28:51.753 00:28:51.753 01:11:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:51.753 01:11:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:51.753 EAL: No free 2048 kB hugepages reported on node 1 00:29:01.728 Initializing NVMe Controllers 00:29:01.728 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:01.728 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:01.728 Initialization complete. Launching workers. 
00:29:01.728 ======================================================== 00:29:01.728 Latency(us) 00:29:01.728 Device Information : IOPS MiB/s Average min max 00:29:01.728 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 80.00 10.00 12518.79 5001.39 48853.03 00:29:01.728 ======================================================== 00:29:01.728 Total : 80.00 10.00 12518.79 5001.39 48853.03 00:29:01.728 00:29:01.728 01:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:01.728 01:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:01.728 01:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:01.728 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.735 Initializing NVMe Controllers 00:29:11.735 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:11.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:11.735 Initialization complete. Launching workers. 
00:29:11.735 ======================================================== 00:29:11.735 Latency(us) 00:29:11.735 Device Information : IOPS MiB/s Average min max 00:29:11.735 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7108.16 3.47 4501.44 284.59 12147.92 00:29:11.735 ======================================================== 00:29:11.735 Total : 7108.16 3.47 4501.44 284.59 12147.92 00:29:11.735 00:29:11.735 01:11:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:11.735 01:11:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:11.735 EAL: No free 2048 kB hugepages reported on node 1 00:29:21.711 Initializing NVMe Controllers 00:29:21.711 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:21.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:21.711 Initialization complete. Launching workers. 
00:29:21.711 ======================================================== 00:29:21.711 Latency(us) 00:29:21.711 Device Information : IOPS MiB/s Average min max 00:29:21.711 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3152.51 394.06 10157.10 810.69 24997.36 00:29:21.711 ======================================================== 00:29:21.711 Total : 3152.51 394.06 10157.10 810.69 24997.36 00:29:21.711 00:29:21.711 01:11:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:21.711 01:11:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:21.711 01:11:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:21.711 EAL: No free 2048 kB hugepages reported on node 1 00:29:31.678 Initializing NVMe Controllers 00:29:31.678 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:31.678 Controller IO queue size 128, less than required. 00:29:31.678 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:31.678 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:31.678 Initialization complete. Launching workers. 
00:29:31.678 ======================================================== 00:29:31.678 Latency(us) 00:29:31.678 Device Information : IOPS MiB/s Average min max 00:29:31.678 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11836.60 5.78 10822.23 1849.97 29896.58 00:29:31.678 ======================================================== 00:29:31.678 Total : 11836.60 5.78 10822.23 1849.97 29896.58 00:29:31.678 00:29:31.678 01:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:31.678 01:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:31.678 EAL: No free 2048 kB hugepages reported on node 1 00:29:41.666 Initializing NVMe Controllers 00:29:41.666 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:41.666 Controller IO queue size 128, less than required. 00:29:41.666 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:41.666 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:41.666 Initialization complete. Launching workers. 
00:29:41.666 ======================================================== 00:29:41.666 Latency(us) 00:29:41.666 Device Information : IOPS MiB/s Average min max 00:29:41.666 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1220.50 152.56 105515.27 22931.98 218542.86 00:29:41.666 ======================================================== 00:29:41.666 Total : 1220.50 152.56 105515.27 22931.98 218542.86 00:29:41.666 00:29:41.666 01:12:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:41.924 01:12:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b0e217ee-2b8c-4838-9ad3-8e61fea4e751 00:29:42.858 01:12:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:42.858 01:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 041a46a4-d591-431e-958a-6445970e3f94 00:29:43.122 01:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:43.378 01:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:43.378 01:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:43.378 01:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:43.378 01:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:29:43.378 01:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:43.378 01:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:29:43.378 01:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i 
in {1..20} 00:29:43.378 01:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:43.378 rmmod nvme_tcp 00:29:43.378 rmmod nvme_fabrics 00:29:43.378 rmmod nvme_keyring 00:29:43.378 01:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:43.378 01:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:29:43.378 01:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:29:43.378 01:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1923502 ']' 00:29:43.378 01:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1923502 00:29:43.378 01:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 1923502 ']' 00:29:43.378 01:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 1923502 00:29:43.378 01:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:29:43.378 01:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:43.378 01:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1923502 00:29:43.635 01:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:43.635 01:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:43.635 01:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1923502' 00:29:43.635 killing process with pid 1923502 00:29:43.635 01:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 1923502 00:29:43.635 01:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 1923502 00:29:45.011 01:12:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:45.011 01:12:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # 
[[ tcp == \t\c\p ]] 00:29:45.011 01:12:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:45.011 01:12:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:45.011 01:12:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:45.011 01:12:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.011 01:12:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.011 01:12:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:47.578 00:29:47.578 real 1m30.562s 00:29:47.578 user 5m32.590s 00:29:47.578 sys 0m16.239s 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:47.578 ************************************ 00:29:47.578 END TEST nvmf_perf 00:29:47.578 ************************************ 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.578 ************************************ 00:29:47.578 START TEST nvmf_fio_host 00:29:47.578 ************************************ 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:47.578 * 
Looking for test storage... 00:29:47.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.578 01:12:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:47.578 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:47.579 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:47.579 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:47.579 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.579 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:47.579 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.579 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:47.579 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:47.579 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:29:47.579 01:12:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.485 01:12:19 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:49.485 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:29:49.485 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:49.485 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:49.486 01:12:19 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:49.486 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:49.486 01:12:19 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:49.486 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:49.486 01:12:19 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:49.486 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:49.486 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:49.486 01:12:19 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:49.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:49.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:29:49.486 00:29:49.486 --- 10.0.0.2 ping statistics --- 00:29:49.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:49.486 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:49.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:49.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:29:49.486 00:29:49.486 --- 10.0.0.1 ping statistics --- 00:29:49.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:49.486 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:49.486 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:49.486 01:12:19 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:49.487 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:49.487 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:49.487 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:49.487 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:49.487 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.487 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1935966 00:29:49.487 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:49.487 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:49.487 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1935966 00:29:49.487 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 1935966 ']' 00:29:49.487 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:49.487 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:49.487 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:49.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:49.487 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:49.487 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.487 [2024-07-26 01:12:19.626757] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:29:49.487 [2024-07-26 01:12:19.626832] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:49.487 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.487 [2024-07-26 01:12:19.696133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:49.487 [2024-07-26 01:12:19.789830] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:49.487 [2024-07-26 01:12:19.789894] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:49.487 [2024-07-26 01:12:19.789910] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:49.487 [2024-07-26 01:12:19.789923] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:49.487 [2024-07-26 01:12:19.789936] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:49.487 [2024-07-26 01:12:19.790030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:49.487 [2024-07-26 01:12:19.790116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:49.487 [2024-07-26 01:12:19.790201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:49.487 [2024-07-26 01:12:19.790204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.745 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:49.745 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:29:49.745 01:12:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:49.745 [2024-07-26 01:12:20.149979] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:50.003 01:12:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:50.003 01:12:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:50.003 01:12:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.003 01:12:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:50.261 Malloc1 00:29:50.261 01:12:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:50.519 01:12:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:50.778 01:12:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:50.778 [2024-07-26 01:12:21.180573] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:50.778 01:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:51.037 01:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:51.037 01:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:51.037 01:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:51.037 01:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:51.037 01:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:51.037 01:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:51.037 01:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:51.037 01:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:51.037 01:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:51.037 01:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:51.037 01:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:51.037 01:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:51.037 01:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:51.037 01:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:51.037 01:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:51.037 01:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:51.037 01:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:51.295 01:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:51.295 01:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:51.295 01:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:51.295 01:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:51.295 01:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:51.295 01:12:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:51.295 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:51.295 fio-3.35 
00:29:51.295 Starting 1 thread 00:29:51.295 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.819 00:29:53.819 test: (groupid=0, jobs=1): err= 0: pid=1936325: Fri Jul 26 01:12:23 2024 00:29:53.819 read: IOPS=7774, BW=30.4MiB/s (31.8MB/s)(61.0MiB/2007msec) 00:29:53.819 slat (nsec): min=1989, max=119637, avg=2647.22, stdev=1629.78 00:29:53.819 clat (usec): min=2950, max=15304, avg=9034.15, stdev=749.54 00:29:53.819 lat (usec): min=2977, max=15306, avg=9036.80, stdev=749.45 00:29:53.819 clat percentiles (usec): 00:29:53.819 | 1.00th=[ 7308], 5.00th=[ 7832], 10.00th=[ 8094], 20.00th=[ 8455], 00:29:53.819 | 30.00th=[ 8717], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9241], 00:29:53.819 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[ 9896], 95.00th=[10159], 00:29:53.819 | 99.00th=[10683], 99.50th=[10814], 99.90th=[12911], 99.95th=[13960], 00:29:53.819 | 99.99th=[15139] 00:29:53.819 bw ( KiB/s): min=29880, max=31784, per=99.89%, avg=31064.00, stdev=826.55, samples=4 00:29:53.819 iops : min= 7470, max= 7946, avg=7766.00, stdev=206.64, samples=4 00:29:53.819 write: IOPS=7757, BW=30.3MiB/s (31.8MB/s)(60.8MiB/2007msec); 0 zone resets 00:29:53.819 slat (usec): min=2, max=109, avg= 2.79, stdev= 1.36 00:29:53.819 clat (usec): min=1363, max=13898, avg=7377.36, stdev=617.88 00:29:53.819 lat (usec): min=1369, max=13901, avg=7380.15, stdev=617.86 00:29:53.819 clat percentiles (usec): 00:29:53.819 | 1.00th=[ 5932], 5.00th=[ 6456], 10.00th=[ 6652], 20.00th=[ 6915], 00:29:53.820 | 30.00th=[ 7111], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7504], 00:29:53.820 | 70.00th=[ 7701], 80.00th=[ 7832], 90.00th=[ 8094], 95.00th=[ 8291], 00:29:53.820 | 99.00th=[ 8717], 99.50th=[ 8848], 99.90th=[11338], 99.95th=[11863], 00:29:53.820 | 99.99th=[13173] 00:29:53.820 bw ( KiB/s): min=30832, max=31272, per=100.00%, avg=31030.00, stdev=185.60, samples=4 00:29:53.820 iops : min= 7708, max= 7818, avg=7757.50, stdev=46.40, samples=4 00:29:53.820 lat (msec) : 2=0.01%, 4=0.10%, 10=95.74%, 20=4.16% 
00:29:53.820 cpu : usr=59.07%, sys=36.89%, ctx=35, majf=0, minf=33 00:29:53.820 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:53.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:53.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:53.820 issued rwts: total=15604,15570,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:53.820 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:53.820 00:29:53.820 Run status group 0 (all jobs): 00:29:53.820 READ: bw=30.4MiB/s (31.8MB/s), 30.4MiB/s-30.4MiB/s (31.8MB/s-31.8MB/s), io=61.0MiB (63.9MB), run=2007-2007msec 00:29:53.820 WRITE: bw=30.3MiB/s (31.8MB/s), 30.3MiB/s-30.3MiB/s (31.8MB/s-31.8MB/s), io=60.8MiB (63.8MB), run=2007-2007msec 00:29:53.820 01:12:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:53.820 01:12:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:53.820 01:12:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:53.820 01:12:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:53.820 01:12:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:53.820 01:12:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:53.820 01:12:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 
00:29:53.820 01:12:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:53.820 01:12:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:53.820 01:12:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:53.820 01:12:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:53.820 01:12:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:53.820 01:12:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:53.820 01:12:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:53.820 01:12:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:53.820 01:12:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:53.820 01:12:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:53.820 01:12:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:53.820 01:12:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:53.820 01:12:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:53.820 01:12:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:53.820 01:12:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' 00:29:53.820 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:53.820 fio-3.35 00:29:53.820 Starting 1 thread 00:29:53.820 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.346 00:29:56.346 test: (groupid=0, jobs=1): err= 0: pid=1936791: Fri Jul 26 01:12:26 2024 00:29:56.346 read: IOPS=8435, BW=132MiB/s (138MB/s)(265MiB/2009msec) 00:29:56.346 slat (nsec): min=2927, max=97455, avg=3863.39, stdev=1957.88 00:29:56.346 clat (usec): min=2547, max=16609, avg=8856.79, stdev=2075.35 00:29:56.346 lat (usec): min=2550, max=16613, avg=8860.66, stdev=2075.39 00:29:56.346 clat percentiles (usec): 00:29:56.346 | 1.00th=[ 4686], 5.00th=[ 5669], 10.00th=[ 6259], 20.00th=[ 7046], 00:29:56.346 | 30.00th=[ 7701], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9372], 00:29:56.346 | 70.00th=[10028], 80.00th=[10552], 90.00th=[11469], 95.00th=[12387], 00:29:56.346 | 99.00th=[14484], 99.50th=[15008], 99.90th=[16188], 99.95th=[16450], 00:29:56.346 | 99.99th=[16581] 00:29:56.346 bw ( KiB/s): min=60128, max=76768, per=51.40%, avg=69368.00, stdev=7769.93, samples=4 00:29:56.346 iops : min= 3758, max= 4798, avg=4335.50, stdev=485.62, samples=4 00:29:56.346 write: IOPS=4916, BW=76.8MiB/s (80.5MB/s)(141MiB/1837msec); 0 zone resets 00:29:56.346 slat (usec): min=30, max=162, avg=34.35, stdev= 5.92 00:29:56.346 clat (usec): min=5374, max=17583, avg=11180.59, stdev=1936.96 00:29:56.346 lat (usec): min=5405, max=17615, avg=11214.94, stdev=1936.85 00:29:56.346 clat percentiles (usec): 00:29:56.346 | 1.00th=[ 7242], 5.00th=[ 8356], 10.00th=[ 8979], 20.00th=[ 9634], 00:29:56.346 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10945], 60.00th=[11469], 00:29:56.346 | 70.00th=[11994], 80.00th=[12780], 90.00th=[13960], 95.00th=[14746], 00:29:56.346 | 99.00th=[16450], 99.50th=[16909], 99.90th=[17433], 99.95th=[17433], 00:29:56.346 | 99.99th=[17695] 00:29:56.346 bw ( KiB/s): min=62880, max=79872, per=91.67%, 
avg=72104.00, stdev=7899.37, samples=4 00:29:56.346 iops : min= 3930, max= 4992, avg=4506.50, stdev=493.71, samples=4 00:29:56.346 lat (msec) : 4=0.17%, 10=56.02%, 20=43.81% 00:29:56.346 cpu : usr=76.00%, sys=21.86%, ctx=36, majf=0, minf=51 00:29:56.346 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:29:56.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:56.346 issued rwts: total=16947,9031,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.346 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:56.346 00:29:56.346 Run status group 0 (all jobs): 00:29:56.346 READ: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=265MiB (278MB), run=2009-2009msec 00:29:56.346 WRITE: bw=76.8MiB/s (80.5MB/s), 76.8MiB/s-76.8MiB/s (80.5MB/s-80.5MB/s), io=141MiB (148MB), run=1837-1837msec 00:29:56.346 01:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:56.604 01:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:29:56.604 01:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:29:56.604 01:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:29:56.604 01:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:29:56.604 01:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:29:56.604 01:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:56.604 01:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:56.604 01:12:26 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:56.604 01:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:29:56.604 01:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:29:56.604 01:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:29:59.888 Nvme0n1 00:29:59.888 01:12:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:03.173 01:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=f8d70f22-cc7f-4639-b543-fa1b8c2375ab 00:30:03.173 01:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb f8d70f22-cc7f-4639-b543-fa1b8c2375ab 00:30:03.173 01:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=f8d70f22-cc7f-4639-b543-fa1b8c2375ab 00:30:03.173 01:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:03.173 01:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:30:03.173 01:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:30:03.173 01:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:03.173 01:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:03.173 { 00:30:03.173 "uuid": "f8d70f22-cc7f-4639-b543-fa1b8c2375ab", 00:30:03.173 "name": "lvs_0", 00:30:03.173 "base_bdev": "Nvme0n1", 00:30:03.173 "total_data_clusters": 930, 00:30:03.173 "free_clusters": 930, 00:30:03.173 
"block_size": 512, 00:30:03.173 "cluster_size": 1073741824 00:30:03.173 } 00:30:03.173 ]' 00:30:03.173 01:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="f8d70f22-cc7f-4639-b543-fa1b8c2375ab") .free_clusters' 00:30:03.173 01:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:30:03.173 01:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="f8d70f22-cc7f-4639-b543-fa1b8c2375ab") .cluster_size' 00:30:03.173 01:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:30:03.173 01:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:30:03.173 01:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:30:03.173 952320 00:30:03.173 01:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:03.430 73fe2603-c5cb-4a94-b5ff-90c4b44be51e 00:30:03.430 01:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:03.687 01:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:03.946 01:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:04.205 01:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 
--bs=4096 00:30:04.205 01:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:04.205 01:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:04.205 01:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:04.205 01:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:04.205 01:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:04.205 01:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:04.205 01:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:04.205 01:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:04.205 01:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:04.205 01:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:04.205 01:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:04.205 01:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:04.205 01:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:04.205 01:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:04.205 01:12:34 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:04.205 01:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:04.205 01:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:04.205 01:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:04.205 01:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:04.205 01:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:04.205 01:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:04.205 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:04.205 fio-3.35 00:30:04.205 Starting 1 thread 00:30:04.464 EAL: No free 2048 kB hugepages reported on node 1 00:30:06.993 00:30:06.993 test: (groupid=0, jobs=1): err= 0: pid=1938071: Fri Jul 26 01:12:36 2024 00:30:06.993 read: IOPS=6000, BW=23.4MiB/s (24.6MB/s)(47.1MiB/2008msec) 00:30:06.993 slat (usec): min=2, max=176, avg= 2.74, stdev= 2.57 00:30:06.993 clat (usec): min=884, max=171404, avg=11685.98, stdev=11648.17 00:30:06.993 lat (usec): min=888, max=171446, avg=11688.72, stdev=11648.56 00:30:06.993 clat percentiles (msec): 00:30:06.993 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:30:06.993 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:30:06.993 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:30:06.993 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 
00:30:06.993 | 99.99th=[ 171] 00:30:06.993 bw ( KiB/s): min=16776, max=26512, per=99.78%, avg=23952.00, stdev=4787.00, samples=4 00:30:06.993 iops : min= 4194, max= 6628, avg=5988.00, stdev=1196.75, samples=4 00:30:06.993 write: IOPS=5981, BW=23.4MiB/s (24.5MB/s)(46.9MiB/2008msec); 0 zone resets 00:30:06.993 slat (usec): min=2, max=148, avg= 2.84, stdev= 1.98 00:30:06.993 clat (usec): min=339, max=169318, avg=9482.26, stdev=10935.62 00:30:06.993 lat (usec): min=342, max=169326, avg=9485.10, stdev=10936.02 00:30:06.993 clat percentiles (msec): 00:30:06.993 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:30:06.993 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:30:06.993 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 11], 00:30:06.993 | 99.00th=[ 11], 99.50th=[ 16], 99.90th=[ 169], 99.95th=[ 169], 00:30:06.993 | 99.99th=[ 169] 00:30:06.993 bw ( KiB/s): min=17712, max=26040, per=99.96%, avg=23916.00, stdev=4136.35, samples=4 00:30:06.993 iops : min= 4428, max= 6510, avg=5979.00, stdev=1034.09, samples=4 00:30:06.993 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:30:06.994 lat (msec) : 2=0.02%, 4=0.13%, 10=56.09%, 20=43.20%, 250=0.53% 00:30:06.994 cpu : usr=59.34%, sys=37.72%, ctx=90, majf=0, minf=33 00:30:06.994 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:06.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:06.994 issued rwts: total=12050,12011,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.994 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:06.994 00:30:06.994 Run status group 0 (all jobs): 00:30:06.994 READ: bw=23.4MiB/s (24.6MB/s), 23.4MiB/s-23.4MiB/s (24.6MB/s-24.6MB/s), io=47.1MiB (49.4MB), run=2008-2008msec 00:30:06.994 WRITE: bw=23.4MiB/s (24.5MB/s), 23.4MiB/s-23.4MiB/s (24.5MB/s-24.5MB/s), io=46.9MiB (49.2MB), run=2008-2008msec 00:30:06.994 01:12:37 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:06.994 01:12:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:08.371 01:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=8389b9e8-7df4-4835-adb1-698ba32a691f 00:30:08.371 01:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 8389b9e8-7df4-4835-adb1-698ba32a691f 00:30:08.371 01:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=8389b9e8-7df4-4835-adb1-698ba32a691f 00:30:08.371 01:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:08.371 01:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:30:08.371 01:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:30:08.371 01:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:08.371 01:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:08.371 { 00:30:08.371 "uuid": "f8d70f22-cc7f-4639-b543-fa1b8c2375ab", 00:30:08.371 "name": "lvs_0", 00:30:08.371 "base_bdev": "Nvme0n1", 00:30:08.371 "total_data_clusters": 930, 00:30:08.371 "free_clusters": 0, 00:30:08.371 "block_size": 512, 00:30:08.371 "cluster_size": 1073741824 00:30:08.371 }, 00:30:08.371 { 00:30:08.371 "uuid": "8389b9e8-7df4-4835-adb1-698ba32a691f", 00:30:08.371 "name": "lvs_n_0", 00:30:08.371 "base_bdev": "73fe2603-c5cb-4a94-b5ff-90c4b44be51e", 00:30:08.371 "total_data_clusters": 237847, 00:30:08.371 "free_clusters": 237847, 00:30:08.371 "block_size": 512, 00:30:08.371 
"cluster_size": 4194304 00:30:08.371 } 00:30:08.371 ]' 00:30:08.371 01:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="8389b9e8-7df4-4835-adb1-698ba32a691f") .free_clusters' 00:30:08.371 01:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:30:08.371 01:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="8389b9e8-7df4-4835-adb1-698ba32a691f") .cluster_size' 00:30:08.371 01:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:08.371 01:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:30:08.371 01:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:30:08.371 951388 00:30:08.371 01:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:09.307 09419c3a-cd87-412e-9bc8-94beba839fbf 00:30:09.307 01:12:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:09.307 01:12:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:09.565 01:12:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:09.823 01:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:09.823 
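The trace above shows `get_lvs_free_mb` turning the lvstore's reported `free_clusters` and `cluster_size` into the megabyte count passed to `bdev_lvol_create`. A minimal sketch of that arithmetic (an assumption mirroring the traced values, not the actual `autotest_common.sh` source):

```shell
# Convert an lvstore's free clusters to MiB, as traced for lvs_n_0 above.
# fc and cs are the values jq extracted from bdev_lvol_get_lvstores output.
fc=237847                             # free_clusters
cs=4194304                            # cluster_size in bytes (4 MiB)
free_mb=$(( fc * cs / 1024 / 1024 ))  # bytes free, scaled down to MiB
echo "$free_mb"                       # prints 951388
```

The same formula reproduces the earlier `lvs_0` result: 930 clusters of 1073741824 bytes (1 GiB) give 952320 MiB, the size used for `lbd_0`.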
01:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:09.823 01:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:09.823 01:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:09.823 01:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:09.823 01:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:09.823 01:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:09.823 01:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:09.823 01:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:09.823 01:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:09.823 01:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:09.824 01:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:09.824 01:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:09.824 01:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:09.824 01:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:09.824 01:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:09.824 01:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:09.824 01:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:09.824 01:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:09.824 01:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:09.824 01:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:09.824 01:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:10.083 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:10.083 fio-3.35 00:30:10.083 Starting 1 thread 00:30:10.083 EAL: No free 2048 kB hugepages reported on node 1 00:30:12.643 00:30:12.643 test: (groupid=0, jobs=1): err= 0: pid=1938808: Fri Jul 26 01:12:42 2024 00:30:12.643 read: IOPS=5784, BW=22.6MiB/s (23.7MB/s)(45.4MiB/2009msec) 00:30:12.643 slat (nsec): min=1990, max=164253, avg=2727.88, stdev=2455.93 00:30:12.643 clat (usec): min=4599, max=20310, avg=12187.41, stdev=1065.01 00:30:12.643 lat (usec): min=4605, max=20313, avg=12190.14, stdev=1064.87 00:30:12.643 clat percentiles (usec): 00:30:12.643 | 1.00th=[ 9634], 5.00th=[10552], 10.00th=[10945], 20.00th=[11338], 00:30:12.643 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12387], 00:30:12.643 | 70.00th=[12649], 80.00th=[13042], 90.00th=[13435], 95.00th=[13829], 00:30:12.643 | 99.00th=[14615], 99.50th=[14746], 99.90th=[18220], 99.95th=[20055], 
00:30:12.643 | 99.99th=[20317] 00:30:12.643 bw ( KiB/s): min=21624, max=23880, per=99.77%, avg=23086.00, stdev=1000.85, samples=4 00:30:12.643 iops : min= 5406, max= 5970, avg=5771.50, stdev=250.21, samples=4 00:30:12.643 write: IOPS=5764, BW=22.5MiB/s (23.6MB/s)(45.2MiB/2009msec); 0 zone resets 00:30:12.643 slat (usec): min=2, max=135, avg= 2.84, stdev= 1.89 00:30:12.643 clat (usec): min=2246, max=18171, avg=9781.32, stdev=900.86 00:30:12.643 lat (usec): min=2254, max=18174, avg=9784.16, stdev=900.79 00:30:12.643 clat percentiles (usec): 00:30:12.643 | 1.00th=[ 7701], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9110], 00:30:12.643 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10028], 00:30:12.643 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10814], 95.00th=[11076], 00:30:12.643 | 99.00th=[11731], 99.50th=[12125], 99.90th=[15139], 99.95th=[17957], 00:30:12.643 | 99.99th=[18220] 00:30:12.643 bw ( KiB/s): min=22744, max=23232, per=100.00%, avg=23064.00, stdev=219.67, samples=4 00:30:12.643 iops : min= 5686, max= 5808, avg=5766.00, stdev=54.92, samples=4 00:30:12.643 lat (msec) : 4=0.05%, 10=31.40%, 20=68.53%, 50=0.03% 00:30:12.643 cpu : usr=57.12%, sys=40.19%, ctx=113, majf=0, minf=33 00:30:12.643 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:12.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:12.643 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:12.643 issued rwts: total=11622,11580,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:12.643 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:12.643 00:30:12.643 Run status group 0 (all jobs): 00:30:12.643 READ: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=45.4MiB (47.6MB), run=2009-2009msec 00:30:12.643 WRITE: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=45.2MiB (47.4MB), run=2009-2009msec 00:30:12.643 01:12:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:12.901 01:12:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:12.902 01:12:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:17.087 01:12:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:17.087 01:12:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:20.371 01:12:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:20.371 01:12:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:22.271 01:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:22.271 01:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:22.271 01:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:22.271 01:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:22.271 01:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:22.271 01:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:22.271 01:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:22.271 01:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:22.271 01:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:22.271 
rmmod nvme_tcp 00:30:22.271 rmmod nvme_fabrics 00:30:22.271 rmmod nvme_keyring 00:30:22.271 01:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:22.271 01:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:22.271 01:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:22.271 01:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1935966 ']' 00:30:22.271 01:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1935966 00:30:22.271 01:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 1935966 ']' 00:30:22.271 01:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 1935966 00:30:22.271 01:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:30:22.271 01:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:22.271 01:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1935966 00:30:22.271 01:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:22.271 01:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:22.271 01:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1935966' 00:30:22.271 killing process with pid 1935966 00:30:22.271 01:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 1935966 00:30:22.271 01:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 1935966 00:30:22.271 01:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:22.271 01:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:22.271 01:12:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:22.271 01:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:22.271 01:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:22.271 01:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:22.271 01:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:22.271 01:12:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:24.804 01:12:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:24.804 00:30:24.804 real 0m37.162s 00:30:24.804 user 2m22.634s 00:30:24.804 sys 0m7.123s 00:30:24.804 01:12:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:24.804 01:12:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.804 ************************************ 00:30:24.804 END TEST nvmf_fio_host 00:30:24.804 ************************************ 00:30:24.804 01:12:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:24.804 01:12:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:24.804 01:12:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:24.804 01:12:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.804 ************************************ 00:30:24.804 START TEST nvmf_failover 00:30:24.804 ************************************ 00:30:24.804 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 
00:30:24.804 * Looking for test storage... 00:30:24.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:24.804 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:24.804 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:24.804 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:24.804 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:24.804 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:24.804 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:24.805 01:12:54 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:24.805 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:24.806 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:24.806 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:24.806 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:24.806 01:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 
00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:26.709 
01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:26.709 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:26.709 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:26.709 01:12:56 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:26.709 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:26.709 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:26.709 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:26.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:26.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:30:26.710 00:30:26.710 --- 10.0.0.2 ping statistics --- 00:30:26.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.710 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:26.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:26.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:30:26.710 00:30:26.710 --- 10.0.0.1 ping statistics --- 00:30:26.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.710 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 
00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1942128 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1942128 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1942128 ']' 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:26.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:26.710 01:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:26.710 [2024-07-26 01:12:56.856021] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:30:26.710 [2024-07-26 01:12:56.856137] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:26.710 EAL: No free 2048 kB hugepages reported on node 1 00:30:26.710 [2024-07-26 01:12:56.920490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:26.710 [2024-07-26 01:12:57.011305] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:26.710 [2024-07-26 01:12:57.011380] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:26.710 [2024-07-26 01:12:57.011409] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:26.710 [2024-07-26 01:12:57.011421] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:26.710 [2024-07-26 01:12:57.011431] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:26.710 [2024-07-26 01:12:57.011488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:26.710 [2024-07-26 01:12:57.011548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:26.710 [2024-07-26 01:12:57.011551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:26.710 01:12:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:26.710 01:12:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:30:26.710 01:12:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:26.710 01:12:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:26.710 01:12:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:26.968 01:12:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:26.968 01:12:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:27.225 [2024-07-26 01:12:57.426272] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:27.225 01:12:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:27.482 Malloc0 00:30:27.482 01:12:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:27.739 01:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:27.996 01:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:28.253 [2024-07-26 01:12:58.578621] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:28.253 01:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:28.511 [2024-07-26 01:12:58.827324] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:28.511 01:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:28.769 [2024-07-26 01:12:59.092313] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:28.769 01:12:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1942455 00:30:28.769 01:12:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:28.769 01:12:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:28.769 01:12:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1942455 /var/tmp/bdevperf.sock 00:30:28.769 01:12:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1942455 ']' 00:30:28.769 01:12:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:28.769 01:12:59 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:28.769 01:12:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:28.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:28.769 01:12:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:28.769 01:12:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:29.027 01:12:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:29.027 01:12:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:30:29.027 01:12:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:29.594 NVMe0n1 00:30:29.594 01:12:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:29.852 00:30:29.852 01:13:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1942537 00:30:29.852 01:13:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:29.852 01:13:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:30.787 01:13:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 
-t tcp -a 10.0.0.2 -s 4420 00:30:31.045 [2024-07-26 01:13:01.431902] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f7290 is same with the state(5) to be set 00:30:31.045 01:13:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:34.333 01:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:34.591 00:30:34.591 01:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:34.877 [2024-07-26 01:13:05.110915] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f80d0 is same with the state(5) to be set 00:30:34.877 [2024-07-26 01:13:05.111001] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f80d0 is same with the state(5) to be set 00:30:34.877 [2024-07-26 01:13:05.111017] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f80d0 is same with the state(5) to be set 00:30:34.877 [2024-07-26 01:13:05.111044] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f80d0 is same with the state(5) to be set 00:30:34.877 [2024-07-26 01:13:05.111055] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f80d0 is same with the state(5) to be set 00:30:34.878 [2024-07-26 01:13:05.111076] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f80d0 is same with the state(5) to be set 00:30:34.878 [2024-07-26 01:13:05.111103] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f80d0 is same with the state(5) to be set 00:30:34.878 [2024-07-26 01:13:05.111115] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f80d0 is same with the state(5) to be set 00:30:34.878 [2024-07-26 01:13:05.111127] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f80d0 is same with the state(5) to be set 00:30:34.878 [2024-07-26 01:13:05.111147] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f80d0 is same with the state(5) to be set 00:30:34.878 [2024-07-26 01:13:05.111159] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f80d0 is same with the state(5) to be set 00:30:34.878 [2024-07-26 01:13:05.111170] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f80d0 is same with the state(5) to be set 00:30:34.878 [2024-07-26 01:13:05.111181] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f80d0 is same with the state(5) to be set 00:30:34.878 [2024-07-26 01:13:05.111193] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f80d0 is same with the state(5) to be set 00:30:34.878 [2024-07-26 01:13:05.111204] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f80d0 is same with the state(5) to be set 00:30:34.878 [2024-07-26 01:13:05.111216] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f80d0 is same with the state(5) to be set 00:30:34.878 [2024-07-26 01:13:05.111227] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f80d0 is same with the state(5) to be set 00:30:34.878 [2024-07-26 01:13:05.111238] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f80d0 is same with the state(5) to be set 00:30:34.878 [2024-07-26 01:13:05.111250] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f80d0 is same with the state(5) to be set 00:30:34.878 01:13:05 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@50 -- # sleep 3 00:30:38.168 01:13:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:38.168 [2024-07-26 01:13:08.410287] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:38.168 01:13:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:30:39.102 01:13:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:39.362 [2024-07-26 01:13:09.666392] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666460] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666476] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666488] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666500] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666512] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666536] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 
is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666548] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666560] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666587] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666611] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666652] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666664] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666675] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666687] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666697] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666708] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666719] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be 
set 00:30:39.362 [2024-07-26 01:13:09.666730] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666741] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666752] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666763] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666774] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666785] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666796] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666806] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666817] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666828] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666839] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666850] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 
01:13:09.666861] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666871] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666883] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666895] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 [2024-07-26 01:13:09.666905] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9460 is same with the state(5) to be set 00:30:39.362 01:13:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1942537 00:30:45.928 0 00:30:45.928 01:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1942455 00:30:45.928 01:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1942455 ']' 00:30:45.928 01:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1942455 00:30:45.928 01:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:30:45.928 01:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:45.928 01:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1942455 00:30:45.928 01:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:45.928 01:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:45.928 01:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1942455' 00:30:45.928 killing process with pid 1942455 00:30:45.928 
01:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1942455
00:30:45.928 01:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1942455
00:30:45.928 01:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:45.928 [2024-07-26 01:12:59.155719] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization...
00:30:45.929 [2024-07-26 01:12:59.155800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1942455 ]
00:30:45.929 EAL: No free 2048 kB hugepages reported on node 1
00:30:45.929 [2024-07-26 01:12:59.216181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:45.929 [2024-07-26 01:12:59.303789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:30:45.929 Running I/O for 15 seconds...
00:30:45.929 [2024-07-26 01:13:01.432488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:45.929 [2024-07-26 01:13:01.432531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:45.931 (the command/completion pair above is repeated for the remaining queued I/Os on sqid:1 -- WRITE commands for lba 76592 through lba 76888 and READ commands for lba 75888 through lba 76344, len:8 each -- with every completion reported as ABORTED - SQ DELETION (00/08), between 01:13:01.432559 and 01:13:01.435373)
00:30:45.931
[2024-07-26 01:13:01.435386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.931 [2024-07-26 01:13:01.435401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:76352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.931 [2024-07-26 01:13:01.435414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.931 [2024-07-26 01:13:01.435428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:76360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.931 [2024-07-26 01:13:01.435441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.931 [2024-07-26 01:13:01.435458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:76368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.931 [2024-07-26 01:13:01.435472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.931 [2024-07-26 01:13:01.435486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:76376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.931 [2024-07-26 01:13:01.435498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.931 [2024-07-26 01:13:01.435512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:76384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.931 [2024-07-26 01:13:01.435525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.931 [2024-07-26 01:13:01.435539] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:76392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.931 [2024-07-26 01:13:01.435551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.931 [2024-07-26 01:13:01.435565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:76400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.931 [2024-07-26 01:13:01.435578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.931 [2024-07-26 01:13:01.435592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:76408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.931 [2024-07-26 01:13:01.435604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.931 [2024-07-26 01:13:01.435618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:76416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.931 [2024-07-26 01:13:01.435630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.931 [2024-07-26 01:13:01.435645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:76424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.931 [2024-07-26 01:13:01.435657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.931 [2024-07-26 01:13:01.435671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:76432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.931 [2024-07-26 01:13:01.435684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.931 [2024-07-26 01:13:01.435698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:76440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.931 [2024-07-26 01:13:01.435725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.931 [2024-07-26 01:13:01.435741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:76448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.931 [2024-07-26 01:13:01.435754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.931 [2024-07-26 01:13:01.435768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:76456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.931 [2024-07-26 01:13:01.435781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.931 [2024-07-26 01:13:01.435795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:76464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.931 [2024-07-26 01:13:01.435815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.931 [2024-07-26 01:13:01.435831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.931 [2024-07-26 01:13:01.435844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.931 [2024-07-26 01:13:01.435858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:76480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.931 
[2024-07-26 01:13:01.435871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.931 [2024-07-26 01:13:01.435886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:76488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.931 [2024-07-26 01:13:01.435899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.931 [2024-07-26 01:13:01.435914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.931 [2024-07-26 01:13:01.435926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:01.435941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:76504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.932 [2024-07-26 01:13:01.435954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:01.435968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:76512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.932 [2024-07-26 01:13:01.435981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:01.435995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:76520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.932 [2024-07-26 01:13:01.436008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:01.436023] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:76528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.932 [2024-07-26 01:13:01.436036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:01.436051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:76536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.932 [2024-07-26 01:13:01.436085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:01.436102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:76544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.932 [2024-07-26 01:13:01.436116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:01.436131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:76552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.932 [2024-07-26 01:13:01.436144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:01.436159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:76560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.932 [2024-07-26 01:13:01.436172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:01.436187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.932 [2024-07-26 01:13:01.436205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:01.436220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.932 [2024-07-26 01:13:01.436234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:01.436249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.932 [2024-07-26 01:13:01.436262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:01.436292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.932 [2024-07-26 01:13:01.436306] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.932 [2024-07-26 01:13:01.436318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76904 len:8 PRP1 0x0 PRP2 0x0 00:30:45.932 [2024-07-26 01:13:01.436332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:01.436405] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a22530 was disconnected and freed. reset controller. 
00:30:45.932 [2024-07-26 01:13:01.436424] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:45.932 [2024-07-26 01:13:01.436471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.932 [2024-07-26 01:13:01.436490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:01.436504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.932 [2024-07-26 01:13:01.436518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:01.436532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.932 [2024-07-26 01:13:01.436545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:01.436559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.932 [2024-07-26 01:13:01.436571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:01.436584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:45.932 [2024-07-26 01:13:01.439896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.932 [2024-07-26 01:13:01.439934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a04830 (9): Bad file descriptor 00:30:45.932 [2024-07-26 01:13:01.474880] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:45.932 [2024-07-26 01:13:05.112274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:69024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.932 [2024-07-26 01:13:05.112314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:05.112342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:69032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.932 [2024-07-26 01:13:05.112358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:05.112389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.932 [2024-07-26 01:13:05.112419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:05.112434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:69048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.932 [2024-07-26 01:13:05.112447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:05.112461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:69056 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:45.932 [2024-07-26 01:13:05.112474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:05.112488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:69064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.932 [2024-07-26 01:13:05.112501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:05.112515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:69072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.932 [2024-07-26 01:13:05.112529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:05.112543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:69080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.932 [2024-07-26 01:13:05.112556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:05.112570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:69088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.932 [2024-07-26 01:13:05.112583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:05.112597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:69096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.932 [2024-07-26 01:13:05.112610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:05.112624] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:69104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.932 [2024-07-26 01:13:05.112637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:05.112651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:69112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.932 [2024-07-26 01:13:05.112663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:05.112677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:69120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.932 [2024-07-26 01:13:05.112690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:05.112704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:69128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.932 [2024-07-26 01:13:05.112717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:05.112731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:69136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.932 [2024-07-26 01:13:05.112748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:05.112762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:69144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.932 [2024-07-26 01:13:05.112776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:05.112790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:69152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.932 [2024-07-26 01:13:05.112802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:05.112816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:69160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.932 [2024-07-26 01:13:05.112829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:05.112843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:69168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.932 [2024-07-26 01:13:05.112856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:05.112870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:69176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.932 [2024-07-26 01:13:05.112883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.932 [2024-07-26 01:13:05.112898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:69184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.933 [2024-07-26 01:13:05.112912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.933 [2024-07-26 01:13:05.112926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:69192 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:45.933 [2024-07-26 01:13:05.112938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.933 [2024-07-26 01:13:05.112952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:69200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.933 [2024-07-26 01:13:05.112965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.933 [2024-07-26 01:13:05.112979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:69208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.933 [2024-07-26 01:13:05.112992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.933 [2024-07-26 01:13:05.113006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:69216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.933 [2024-07-26 01:13:05.113018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.933 [2024-07-26 01:13:05.113033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:69224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.933 [2024-07-26 01:13:05.113069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.933 [2024-07-26 01:13:05.113086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.933 [2024-07-26 01:13:05.113100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.933 [2024-07-26 01:13:05.113135] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:69240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.933 [2024-07-26 01:13:05.113149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.933 [2024-07-26 01:13:05.113165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:69248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.933 [2024-07-26 01:13:05.113178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.933 [2024-07-26 01:13:05.113193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:69256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.933 [2024-07-26 01:13:05.113206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.933 [2024-07-26 01:13:05.113221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:69264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.933 [2024-07-26 01:13:05.113235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.933 [2024-07-26 01:13:05.113250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:69272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.933 [2024-07-26 01:13:05.113263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.933 [2024-07-26 01:13:05.113278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.933 [2024-07-26 01:13:05.113291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.933 [2024-07-26 01:13:05.113307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.933 [2024-07-26 01:13:05.113320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.933 [2024-07-26 01:13:05.113335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:69296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.933 [2024-07-26 01:13:05.113348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.933 [2024-07-26 01:13:05.113363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:69304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.933 [2024-07-26 01:13:05.113376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.933 [2024-07-26 01:13:05.113392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:69312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.933 [2024-07-26 01:13:05.113406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.933 [2024-07-26 01:13:05.113436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:69320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.933 [2024-07-26 01:13:05.113449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.933 [2024-07-26 01:13:05.113463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:69328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.933 
[2024-07-26 01:13:05.113476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.933 [2024-07-26 01:13:05.113505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:69352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.933 [2024-07-26 01:13:05.113522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.933 [2024-07-26 01:13:05.113538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:69360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.933 [2024-07-26 01:13:05.113551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.933 [2024-07-26 01:13:05.113566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:69368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.933 [2024-07-26 01:13:05.113579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.933 [2024-07-26 01:13:05.113594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:69376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.933 [2024-07-26 01:13:05.113607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.933 [2024-07-26 01:13:05.113622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:69384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.933 [2024-07-26 01:13:05.113635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.933 [2024-07-26 01:13:05.113649] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:69392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.933 [2024-07-26 01:13:05.113662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... intervening WRITE commands (lba 69400-69984, len:8, SGL DATA BLOCK) each printed and completed with ABORTED - SQ DELETION (00/08) on qid:1; identical notice pairs elided ...]
00:30:45.935 [2024-07-26 01:13:05.115885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
[... 7 queued WRITE commands (lba 69992-70040, PRP1 0x0 PRP2 0x0) and 2 queued READ commands (lba 69336, 69344) each completed manually with ABORTED - SQ DELETION (00/08), interleaved with "nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o"; repeated records elided ...]
00:30:45.936 [2024-07-26 01:13:05.116413] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a31450 was disconnected and freed. reset controller. 
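The notice pairs above follow a fixed shape: a `nvme_io_qpair_print_command` record for the in-flight command, then a `spdk_nvme_print_completion` record with the abort status. A minimal tallying sketch (our own helper, not part of SPDK; the regexes and field labels `op`/`lba` are assumptions matched against the record format seen in this log):

```python
import re
from collections import Counter

# Match the two record shapes printed by SPDK's nvme_qpair tracing.
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (?P<op>\w+) "
    r"sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) nsid:\d+ lba:(?P<lba>\d+) len:(?P<len>\d+)"
)
CPL_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: (?P<status>.+?) "
    r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) qid:(?P<qid>\d+)"
)

def summarize(log_text: str):
    """Count printed commands per opcode and completions per status string."""
    ops = Counter(m.group("op") for m in CMD_RE.finditer(log_text))
    statuses = Counter(m.group("status") for m in CPL_RE.finditer(log_text))
    return ops, statuses

# One command/completion pair copied from the log above.
sample = (
    "[2024-07-26 01:13:05.113662] nvme_qpair.c: 243:nvme_io_qpair_print_command: "
    "*NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:69392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 "
    "[2024-07-26 01:13:05.113676] nvme_qpair.c: 474:spdk_nvme_print_completion: "
    "*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0"
)
ops, statuses = summarize(sample)
print(ops["WRITE"], statuses["ABORTED - SQ DELETION"])  # 1 1
```

Run over the full capture, such a tally makes the abort storm legible at a glance (e.g. how many WRITEs vs READs were dropped per reset) instead of scrolling thousands of near-identical pairs.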
00:30:45.936 [2024-07-26 01:13:05.116431] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
[... 4 outstanding ASYNC EVENT REQUEST (0c) admin commands (qid:0, cid:3 down to cid:0) each completed with ABORTED - SQ DELETION (00/08); identical notice pairs elided ...]
00:30:45.936 [2024-07-26 01:13:05.116624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
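The `(00/08)` pair printed in each completion is the NVMe status: status code type 0x0 (Generic Command Status) and status code 0x08, Command Aborted due to SQ Deletion. A small decode helper, sketched as an assumption: only the few generic codes relevant here are included, with names matching the strings SPDK prints rather than the full spec table:

```python
# Decode the "(sct/sc)" hex pair from spdk_nvme_print_completion output.
# Partial mapping: only a few Generic Command Status values, named as SPDK
# prints them; anything unknown falls through to a raw hex rendering.
GENERIC_STATUS = {
    0x00: "SUCCESS",
    0x08: "ABORTED - SQ DELETION",
}

def decode_status(sct: int, sc: int) -> str:
    """Return a readable name for a (status code type, status code) pair."""
    if sct == 0x0:  # Generic Command Status
        return GENERIC_STATUS.get(sc, f"GENERIC 0x{sc:02x}")
    return f"SCT 0x{sct:x} / SC 0x{sc:02x}"

print(decode_status(0x0, 0x08))  # ABORTED - SQ DELETION
```

SQ-deletion aborts are expected here: deleting the I/O submission queue during failover forcibly completes everything still queued on it, which is why every in-flight command in this section carries the same status.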
00:30:45.936 [2024-07-26 01:13:05.116665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a04830 (9): Bad file descriptor 00:30:45.936 [2024-07-26 01:13:05.119934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.936 [2024-07-26 01:13:05.310631] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:45.936 [2024-07-26 01:13:09.667904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:38768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.936 [2024-07-26 01:13:09.667943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... further READ commands (lba 38776-38880, len:8, SGL TRANSPORT DATA BLOCK) each completed with ABORTED - SQ DELETION (00/08) on qid:1; identical notice pairs elided ...]
00:30:45.936 [2024-07-26 01:13:09.668425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:38888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.936 [2024-07-26 01:13:09.668438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.936 [2024-07-26 01:13:09.668452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:38896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.936 [2024-07-26 01:13:09.668472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.936 [2024-07-26 01:13:09.668487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.936 [2024-07-26 01:13:09.668500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.936 [2024-07-26 01:13:09.668514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:38912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.936 [2024-07-26 01:13:09.668527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.936 [2024-07-26 01:13:09.668541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:38920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.936 [2024-07-26 01:13:09.668553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.936 [2024-07-26 01:13:09.668568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:38928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.936 [2024-07-26 01:13:09.668581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.936 [2024-07-26 01:13:09.668595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38936 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:45.936 [2024-07-26 01:13:09.668607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.936 [2024-07-26 01:13:09.668622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.936 [2024-07-26 01:13:09.668634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.936 [2024-07-26 01:13:09.668649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:38952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.936 [2024-07-26 01:13:09.668661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.936 [2024-07-26 01:13:09.668676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:38960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.936 [2024-07-26 01:13:09.668688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.936 [2024-07-26 01:13:09.668702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:38968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.936 [2024-07-26 01:13:09.668715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.936 [2024-07-26 01:13:09.668733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.936 [2024-07-26 01:13:09.668746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.936 [2024-07-26 01:13:09.668760] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.936 [2024-07-26 01:13:09.668773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.936 [2024-07-26 01:13:09.668787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:38992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.936 [2024-07-26 01:13:09.668800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.936 [2024-07-26 01:13:09.668815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:39000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.936 [2024-07-26 01:13:09.668827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.936 [2024-07-26 01:13:09.668841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:39008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.936 [2024-07-26 01:13:09.668854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.668869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:39016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.937 [2024-07-26 01:13:09.668881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.668895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:39024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.937 [2024-07-26 01:13:09.668909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.668923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:39032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.937 [2024-07-26 01:13:09.668935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.668949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:39040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.937 [2024-07-26 01:13:09.668962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.668976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.937 [2024-07-26 01:13:09.668988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.669003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:39056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.937 [2024-07-26 01:13:09.669015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.669029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.937 [2024-07-26 01:13:09.669055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.669080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:39072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.937 
[2024-07-26 01:13:09.669097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.669113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:39096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.937 [2024-07-26 01:13:09.669126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.669140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.937 [2024-07-26 01:13:09.669153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.669168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:39112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.937 [2024-07-26 01:13:09.669181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.669195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.937 [2024-07-26 01:13:09.669208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.669222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.937 [2024-07-26 01:13:09.669235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.669250] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.937 [2024-07-26 01:13:09.669263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.669283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.937 [2024-07-26 01:13:09.669296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.669310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.937 [2024-07-26 01:13:09.669324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.669338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:39160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.937 [2024-07-26 01:13:09.669351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.669389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.937 [2024-07-26 01:13:09.669402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.669416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:39176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.937 [2024-07-26 01:13:09.669429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.669443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.937 [2024-07-26 01:13:09.669456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.669470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.937 [2024-07-26 01:13:09.669486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.669501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.937 [2024-07-26 01:13:09.669514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.669528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.937 [2024-07-26 01:13:09.669540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.669554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.937 [2024-07-26 01:13:09.669567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.669581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.937 [2024-07-26 01:13:09.669593] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.669607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.937 [2024-07-26 01:13:09.669620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.669634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.937 [2024-07-26 01:13:09.669647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.669661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.937 [2024-07-26 01:13:09.669673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.669687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.937 [2024-07-26 01:13:09.669700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.669714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.937 [2024-07-26 01:13:09.669727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.669741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 
lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.937 [2024-07-26 01:13:09.669754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.669768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.937 [2024-07-26 01:13:09.669780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.669795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.937 [2024-07-26 01:13:09.669807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.937 [2024-07-26 01:13:09.669824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.937 [2024-07-26 01:13:09.669838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 01:13:09.669852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.938 [2024-07-26 01:13:09.669880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 01:13:09.669896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.938 [2024-07-26 01:13:09.669909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 
[2024-07-26 01:13:09.669924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.938 [2024-07-26 01:13:09.669937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 01:13:09.669952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.938 [2024-07-26 01:13:09.669965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 01:13:09.669979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.938 [2024-07-26 01:13:09.669992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 01:13:09.670006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.938 [2024-07-26 01:13:09.670020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 01:13:09.670034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.938 [2024-07-26 01:13:09.670051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 01:13:09.670089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.938 [2024-07-26 01:13:09.670105] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 01:13:09.670121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.938 [2024-07-26 01:13:09.670134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 01:13:09.670149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.938 [2024-07-26 01:13:09.670163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 01:13:09.670178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.938 [2024-07-26 01:13:09.670192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 01:13:09.670207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.938 [2024-07-26 01:13:09.670224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 01:13:09.670239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.938 [2024-07-26 01:13:09.670253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 01:13:09.670268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 
lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.938 [2024-07-26 01:13:09.670281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 01:13:09.670296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.938 [2024-07-26 01:13:09.670310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 01:13:09.670325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.938 [2024-07-26 01:13:09.670338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 01:13:09.670354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.938 [2024-07-26 01:13:09.670372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 01:13:09.670387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.938 [2024-07-26 01:13:09.670401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 01:13:09.670416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.938 [2024-07-26 01:13:09.670429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 
01:13:09.670445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.938 [2024-07-26 01:13:09.670458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 01:13:09.670474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.938 [2024-07-26 01:13:09.670487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 01:13:09.670502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.938 [2024-07-26 01:13:09.670529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 01:13:09.670563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.938 [2024-07-26 01:13:09.670579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39480 len:8 PRP1 0x0 PRP2 0x0 00:30:45.938 [2024-07-26 01:13:09.670593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 01:13:09.670610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.938 [2024-07-26 01:13:09.670621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.938 [2024-07-26 01:13:09.670636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39488 len:8 PRP1 0x0 PRP2 0x0 00:30:45.938 [2024-07-26 01:13:09.670649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 01:13:09.670662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.938 [2024-07-26 01:13:09.670673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.938 [2024-07-26 01:13:09.670684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39496 len:8 PRP1 0x0 PRP2 0x0 00:30:45.938 [2024-07-26 01:13:09.670696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 01:13:09.670708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.938 [2024-07-26 01:13:09.670721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.938 [2024-07-26 01:13:09.670733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39504 len:8 PRP1 0x0 PRP2 0x0 00:30:45.938 [2024-07-26 01:13:09.670746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 01:13:09.670759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.938 [2024-07-26 01:13:09.670770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.938 [2024-07-26 01:13:09.670781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39512 len:8 PRP1 0x0 PRP2 0x0 00:30:45.938 [2024-07-26 01:13:09.670794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 01:13:09.670811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.938 [2024-07-26 01:13:09.670823] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.938 [2024-07-26 01:13:09.670834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39520 len:8 PRP1 0x0 PRP2 0x0 00:30:45.938 [2024-07-26 01:13:09.670847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 01:13:09.670860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.938 [2024-07-26 01:13:09.670871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.938 [2024-07-26 01:13:09.670882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39528 len:8 PRP1 0x0 PRP2 0x0 00:30:45.938 [2024-07-26 01:13:09.670894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 01:13:09.670907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.938 [2024-07-26 01:13:09.670918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.938 [2024-07-26 01:13:09.670930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39536 len:8 PRP1 0x0 PRP2 0x0 00:30:45.938 [2024-07-26 01:13:09.670942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 01:13:09.670955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.938 [2024-07-26 01:13:09.670966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.938 [2024-07-26 01:13:09.670977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39544 len:8 PRP1 0x0 PRP2 0x0 00:30:45.938 
[2024-07-26 01:13:09.670989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 01:13:09.671002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.938 [2024-07-26 01:13:09.671016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.938 [2024-07-26 01:13:09.671027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39552 len:8 PRP1 0x0 PRP2 0x0 00:30:45.938 [2024-07-26 01:13:09.671040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.938 [2024-07-26 01:13:09.671053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.938 [2024-07-26 01:13:09.671088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.939 [2024-07-26 01:13:09.671101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39560 len:8 PRP1 0x0 PRP2 0x0 00:30:45.939 [2024-07-26 01:13:09.671115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.939 [2024-07-26 01:13:09.671129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.939 [2024-07-26 01:13:09.671146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.939 [2024-07-26 01:13:09.671157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39568 len:8 PRP1 0x0 PRP2 0x0 00:30:45.939 [2024-07-26 01:13:09.671170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.939 [2024-07-26 01:13:09.671182] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:30:45.939 [2024-07-26 01:13:09.671194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.939 [2024-07-26 01:13:09.671205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39576 len:8 PRP1 0x0 PRP2 0x0 00:30:45.939 [2024-07-26 01:13:09.671217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.939 [2024-07-26 01:13:09.671235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.939 [2024-07-26 01:13:09.671246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.939 [2024-07-26 01:13:09.671257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39584 len:8 PRP1 0x0 PRP2 0x0 00:30:45.939 [2024-07-26 01:13:09.671270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.939 [2024-07-26 01:13:09.671283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.939 [2024-07-26 01:13:09.671294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.939 [2024-07-26 01:13:09.671305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39592 len:8 PRP1 0x0 PRP2 0x0 00:30:45.939 [2024-07-26 01:13:09.671318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.939 [2024-07-26 01:13:09.671331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.939 [2024-07-26 01:13:09.671342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.939 [2024-07-26 01:13:09.671359] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39600 len:8 PRP1 0x0 PRP2 0x0 00:30:45.939 [2024-07-26 01:13:09.671387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.939 [2024-07-26 01:13:09.671401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.939 [2024-07-26 01:13:09.671411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.939 [2024-07-26 01:13:09.671423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39608 len:8 PRP1 0x0 PRP2 0x0 00:30:45.939 [2024-07-26 01:13:09.671435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.939 [2024-07-26 01:13:09.671451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.939 [2024-07-26 01:13:09.671462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.939 [2024-07-26 01:13:09.671473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39616 len:8 PRP1 0x0 PRP2 0x0 00:30:45.939 [2024-07-26 01:13:09.671485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.939 [2024-07-26 01:13:09.671498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.939 [2024-07-26 01:13:09.671508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.939 [2024-07-26 01:13:09.671518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39624 len:8 PRP1 0x0 PRP2 0x0 00:30:45.939 [2024-07-26 01:13:09.671530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:45.939 [2024-07-26 01:13:09.671543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.939 [2024-07-26 01:13:09.671559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.939 [2024-07-26 01:13:09.671570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39632 len:8 PRP1 0x0 PRP2 0x0 00:30:45.939 [2024-07-26 01:13:09.671582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.939 [2024-07-26 01:13:09.671595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.939 [2024-07-26 01:13:09.671606] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.939 [2024-07-26 01:13:09.671617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39640 len:8 PRP1 0x0 PRP2 0x0 00:30:45.939 [2024-07-26 01:13:09.671629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.939 [2024-07-26 01:13:09.671646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.939 [2024-07-26 01:13:09.671657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.939 [2024-07-26 01:13:09.671668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39648 len:8 PRP1 0x0 PRP2 0x0 00:30:45.939 [2024-07-26 01:13:09.671680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.939 [2024-07-26 01:13:09.671693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.939 [2024-07-26 01:13:09.671703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:30:45.939 [2024-07-26 01:13:09.671714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39656 len:8 PRP1 0x0 PRP2 0x0 00:30:45.939 [2024-07-26 01:13:09.671726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.939 [2024-07-26 01:13:09.671738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.939 [2024-07-26 01:13:09.671749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.939 [2024-07-26 01:13:09.671760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39664 len:8 PRP1 0x0 PRP2 0x0 00:30:45.939 [2024-07-26 01:13:09.671772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.939 [2024-07-26 01:13:09.671784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.939 [2024-07-26 01:13:09.671794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.939 [2024-07-26 01:13:09.671805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39672 len:8 PRP1 0x0 PRP2 0x0 00:30:45.939 [2024-07-26 01:13:09.671821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.939 [2024-07-26 01:13:09.671834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.939 [2024-07-26 01:13:09.671845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.939 [2024-07-26 01:13:09.671856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39680 len:8 PRP1 0x0 PRP2 0x0 00:30:45.939 [2024-07-26 01:13:09.671868] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.939 [2024-07-26 01:13:09.671881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.939 [2024-07-26 01:13:09.671891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.939 [2024-07-26 01:13:09.671902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39688 len:8 PRP1 0x0 PRP2 0x0 00:30:45.939 [2024-07-26 01:13:09.671914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.939 [2024-07-26 01:13:09.671927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.939 [2024-07-26 01:13:09.671938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.939 [2024-07-26 01:13:09.671949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39696 len:8 PRP1 0x0 PRP2 0x0 00:30:45.939 [2024-07-26 01:13:09.671961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.939 [2024-07-26 01:13:09.671974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.939 [2024-07-26 01:13:09.671984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.939 [2024-07-26 01:13:09.671995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39704 len:8 PRP1 0x0 PRP2 0x0 00:30:45.939 [2024-07-26 01:13:09.672007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.939 [2024-07-26 01:13:09.672024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.939 
[2024-07-26 01:13:09.672035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.939 [2024-07-26 01:13:09.672053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39712 len:8 PRP1 0x0 PRP2 0x0 00:30:45.939 [2024-07-26 01:13:09.672086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.939 [2024-07-26 01:13:09.672101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.939 [2024-07-26 01:13:09.672112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.939 [2024-07-26 01:13:09.672123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39720 len:8 PRP1 0x0 PRP2 0x0 00:30:45.939 [2024-07-26 01:13:09.672136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.939 [2024-07-26 01:13:09.672149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.939 [2024-07-26 01:13:09.672160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.939 [2024-07-26 01:13:09.672171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39728 len:8 PRP1 0x0 PRP2 0x0 00:30:45.939 [2024-07-26 01:13:09.672183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.939 [2024-07-26 01:13:09.672196] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.939 [2024-07-26 01:13:09.672207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.939 [2024-07-26 01:13:09.672222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:39736 len:8 PRP1 0x0 PRP2 0x0 00:30:45.939 [2024-07-26 01:13:09.672235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.939 [2024-07-26 01:13:09.672248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.939 [2024-07-26 01:13:09.672259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.939 [2024-07-26 01:13:09.672270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39744 len:8 PRP1 0x0 PRP2 0x0 00:30:45.940 [2024-07-26 01:13:09.672283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.940 [2024-07-26 01:13:09.672295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.940 [2024-07-26 01:13:09.672306] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.940 [2024-07-26 01:13:09.672317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39752 len:8 PRP1 0x0 PRP2 0x0 00:30:45.940 [2024-07-26 01:13:09.672330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.940 [2024-07-26 01:13:09.672343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.940 [2024-07-26 01:13:09.672360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.940 [2024-07-26 01:13:09.672386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39760 len:8 PRP1 0x0 PRP2 0x0 00:30:45.940 [2024-07-26 01:13:09.672398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.940 [2024-07-26 01:13:09.672412] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.940 [2024-07-26 01:13:09.672422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.940 [2024-07-26 01:13:09.672433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39768 len:8 PRP1 0x0 PRP2 0x0 00:30:45.940 [2024-07-26 01:13:09.672445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.940 [2024-07-26 01:13:09.672462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.940 [2024-07-26 01:13:09.672473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.940 [2024-07-26 01:13:09.672484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39776 len:8 PRP1 0x0 PRP2 0x0 00:30:45.940 [2024-07-26 01:13:09.672496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.940 [2024-07-26 01:13:09.672508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.940 [2024-07-26 01:13:09.672519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.940 [2024-07-26 01:13:09.672530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39784 len:8 PRP1 0x0 PRP2 0x0 00:30:45.940 [2024-07-26 01:13:09.672542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.940 [2024-07-26 01:13:09.672554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.940 [2024-07-26 01:13:09.672564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.940 [2024-07-26 
01:13:09.672575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39080 len:8 PRP1 0x0 PRP2 0x0 00:30:45.940 [2024-07-26 01:13:09.672592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.940 [2024-07-26 01:13:09.672609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.940 [2024-07-26 01:13:09.672620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.940 [2024-07-26 01:13:09.672631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39088 len:8 PRP1 0x0 PRP2 0x0 00:30:45.940 [2024-07-26 01:13:09.672643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.940 [2024-07-26 01:13:09.672699] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a33fe0 was disconnected and freed. reset controller. 
00:30:45.940 [2024-07-26 01:13:09.672717] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:30:45.940 [2024-07-26 01:13:09.672764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.940 [2024-07-26 01:13:09.672783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.940 [2024-07-26 01:13:09.672797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.940 [2024-07-26 01:13:09.672811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.940 [2024-07-26 01:13:09.672825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.940 [2024-07-26 01:13:09.672838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.940 [2024-07-26 01:13:09.672851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:45.940 [2024-07-26 01:13:09.672864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.940 [2024-07-26 01:13:09.672877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:45.940 [2024-07-26 01:13:09.672917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a04830 (9): Bad file descriptor 00:30:45.940 [2024-07-26 01:13:09.676191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.940 [2024-07-26 01:13:09.749963] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:45.940 00:30:45.940 Latency(us) 00:30:45.940 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:45.940 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:45.940 Verification LBA range: start 0x0 length 0x4000 00:30:45.940 NVMe0n1 : 15.00 8424.12 32.91 763.91 0.00 13903.77 543.10 19029.71 00:30:45.940 =================================================================================================================== 00:30:45.940 Total : 8424.12 32.91 763.91 0.00 13903.77 543.10 19029.71 00:30:45.940 Received shutdown signal, test time was about 15.000000 seconds 00:30:45.940 00:30:45.940 Latency(us) 00:30:45.940 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:45.940 =================================================================================================================== 00:30:45.940 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:45.940 01:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:30:45.940 01:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:30:45.940 01:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:30:45.940 01:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1944305 00:30:45.940 01:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
verify -t 1 -f 00:30:45.940 01:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1944305 /var/tmp/bdevperf.sock 00:30:45.940 01:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1944305 ']' 00:30:45.940 01:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:45.940 01:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:45.940 01:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:45.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:45.940 01:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:45.940 01:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:45.940 01:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:45.940 01:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:30:45.940 01:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:45.940 [2024-07-26 01:13:16.114802] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:45.940 01:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:46.201 [2024-07-26 01:13:16.363517] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:46.201 01:13:16 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:46.460 NVMe0n1 00:30:46.460 01:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:47.027 00:30:47.027 01:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:47.285 00:30:47.285 01:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:47.285 01:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:30:47.543 01:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:47.801 01:13:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:30:51.090 01:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:51.090 01:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:30:51.090 01:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1944976 00:30:51.090 01:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:51.090 01:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1944976 00:30:52.468 0 00:30:52.468 01:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:52.468 [2024-07-26 01:13:15.623734] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:30:52.468 [2024-07-26 01:13:15.623821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1944305 ] 00:30:52.468 EAL: No free 2048 kB hugepages reported on node 1 00:30:52.468 [2024-07-26 01:13:15.683854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.468 [2024-07-26 01:13:15.768591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.468 [2024-07-26 01:13:18.114351] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:52.468 [2024-07-26 01:13:18.114485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.468 [2024-07-26 01:13:18.114508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.468 [2024-07-26 01:13:18.114527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.469 [2024-07-26 01:13:18.114540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.469 [2024-07-26 01:13:18.114555] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.469 [2024-07-26 01:13:18.114568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.469 [2024-07-26 01:13:18.114582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.469 [2024-07-26 01:13:18.114595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.469 [2024-07-26 01:13:18.114609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:52.469 [2024-07-26 01:13:18.114662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:52.469 [2024-07-26 01:13:18.114697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x605830 (9): Bad file descriptor 00:30:52.469 [2024-07-26 01:13:18.159498] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:52.469 Running I/O for 1 seconds... 
00:30:52.469 00:30:52.469 Latency(us) 00:30:52.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.469 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:52.469 Verification LBA range: start 0x0 length 0x4000 00:30:52.469 NVMe0n1 : 1.01 8622.85 33.68 0.00 0.00 14785.79 3155.44 12233.39 00:30:52.469 =================================================================================================================== 00:30:52.469 Total : 8622.85 33.68 0.00 0.00 14785.79 3155.44 12233.39 00:30:52.469 01:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:52.469 01:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:30:52.469 01:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:52.726 01:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:52.726 01:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:30:52.983 01:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:53.241 01:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:30:56.526 01:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:56.526 
01:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:30:56.526 01:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1944305 00:30:56.526 01:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1944305 ']' 00:30:56.526 01:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1944305 00:30:56.526 01:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:30:56.526 01:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:56.526 01:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1944305 00:30:56.526 01:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:56.526 01:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:56.526 01:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1944305' 00:30:56.526 killing process with pid 1944305 00:30:56.526 01:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1944305 00:30:56.526 01:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1944305 00:30:56.784 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:30:56.784 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:57.042 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:57.042 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:57.042 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@116 -- # nvmftestfini 00:30:57.042 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:57.042 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:30:57.042 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:57.042 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:30:57.042 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:57.042 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:57.042 rmmod nvme_tcp 00:30:57.042 rmmod nvme_fabrics 00:30:57.042 rmmod nvme_keyring 00:30:57.042 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:57.042 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:30:57.042 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:30:57.042 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1942128 ']' 00:30:57.042 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1942128 00:30:57.042 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1942128 ']' 00:30:57.042 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1942128 00:30:57.042 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:30:57.042 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:57.042 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1942128 00:30:57.042 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:57.042 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 
00:30:57.042 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1942128' 00:30:57.042 killing process with pid 1942128 00:30:57.042 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1942128 00:30:57.042 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1942128 00:30:57.301 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:57.301 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:57.301 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:57.301 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:57.301 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:57.301 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.301 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:57.301 01:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.836 01:13:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:59.836 00:30:59.836 real 0m34.943s 00:30:59.836 user 2m3.656s 00:30:59.836 sys 0m5.851s 00:30:59.836 01:13:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:59.836 01:13:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:59.836 ************************************ 00:30:59.836 END TEST nvmf_failover 00:30:59.836 ************************************ 00:30:59.836 01:13:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh 
--transport=tcp 00:30:59.836 01:13:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:59.836 01:13:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:59.836 01:13:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.836 ************************************ 00:30:59.836 START TEST nvmf_host_discovery 00:30:59.837 ************************************ 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:59.837 * Looking for test storage... 00:30:59.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
paths/export.sh@5 -- # export PATH 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 
-- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:30:59.837 01:13:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
net_dev 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:01.239 01:13:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:01.239 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:01.239 01:13:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:01.239 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:01.239 01:13:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:01.239 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:01.239 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:01.239 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:01.240 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:01.240 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:01.240 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:01.240 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:01.240 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:01.240 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:01.240 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:01.240 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:01.240 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:01.240 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:01.240 01:13:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:01.240 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:01.240 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:01.240 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:01.498 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:01.498 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:01.498 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:01.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:01.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:31:01.498 00:31:01.498 --- 10.0.0.2 ping statistics --- 00:31:01.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.498 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:31:01.498 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:01.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:01.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:31:01.498 00:31:01.498 --- 10.0.0.1 ping statistics --- 00:31:01.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.498 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:31:01.498 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:01.498 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:31:01.498 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:01.498 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:01.498 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:01.498 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:01.498 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:01.498 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:01.498 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:01.498 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:01.498 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:01.498 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:01.498 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.498 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1947573 00:31:01.498 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:01.498 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1947573 00:31:01.498 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1947573 ']' 00:31:01.498 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:01.498 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:01.498 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:01.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:01.498 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:01.498 01:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.498 [2024-07-26 01:13:31.747269] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:31:01.498 [2024-07-26 01:13:31.747354] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:01.498 EAL: No free 2048 kB hugepages reported on node 1 00:31:01.498 [2024-07-26 01:13:31.815950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.498 [2024-07-26 01:13:31.912488] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:01.498 [2024-07-26 01:13:31.912552] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:01.498 [2024-07-26 01:13:31.912569] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:01.498 [2024-07-26 01:13:31.912583] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:01.498 [2024-07-26 01:13:31.912595] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:01.498 [2024-07-26 01:13:31.912624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.757 [2024-07-26 01:13:32.057812] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 
00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:01.757 [2024-07-26 01:13:32.066041] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512
00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:01.757 null0
00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512
00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:01.757 null1
00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine
00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1947719
00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1947719 /tmp/host.sock
00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1947719 ']'
00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock
00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100
00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:31:01.757 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable
00:31:01.757 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:01.757 [2024-07-26 01:13:32.138993] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization...
00:31:01.757 [2024-07-26 01:13:32.139084] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1947719 ]
00:31:01.757 EAL: No free 2048 kB hugepages reported on node 1
00:31:02.015 [2024-07-26 01:13:32.199562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:02.015 [2024-07-26 01:13:32.290318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:31:02.015 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:31:02.015 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0
00:31:02.015 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:31:02.015 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:31:02.015 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:02.015 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:02.015 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:02.015 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:31:02.015 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:02.015 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:02.015 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
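The trace above shows the test launching a second SPDK application (`nvmf_tgt -m 0x1 -r /tmp/host.sock`) and then calling `waitforlisten 1947719 /tmp/host.sock`, with `rpc_addr=/tmp/host.sock` and `max_retries=100` visible in the traced locals. A minimal sketch of that polling pattern is below; the real helper lives in SPDK's `autotest_common.sh`, and everything here beyond the two traced values is a hedged reconstruction, not the actual implementation.

```shell
# Hedged sketch of the waitforlisten pattern traced in the log. Only
# rpc_addr and max_retries=100 come from the trace; the body is inferred.
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}   # the test passes /tmp/host.sock
    local max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while (( max_retries-- )); do
        # The real helper also verifies the target process is alive (kill -0)
        # and probes the RPC server; a plain existence check stands in here.
        if [ -e "$rpc_addr" ]; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
```

In the run above the target creates its Unix-domain RPC socket shortly after `nvmf_tgt` starts, so the loop normally succeeds within a few iterations.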
00:31:02.015 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0
00:31:02.015 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names
00:31:02.015 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:31:02.015 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:31:02.015 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:02.015 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:02.015 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:31:02.015 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:31:02.015 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:02.273 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:02.274 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:31:02.274 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:31:02.274 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:31:02.274 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:02.274 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:02.274 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:31:02.274 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:31:02.274 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:02.274 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:31:02.274 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:31:02.274 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:31:02.274 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:02.274 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:31:02.274 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:31:02.274 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:02.274 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:31:02.274 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:02.532 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:31:02.532 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:31:02.532 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:02.532 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:02.532 [2024-07-26 01:13:32.715780] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:02.532 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:02.532 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:31:02.532 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:31:02.532 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:31:02.532 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:02.532 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:02.532 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:31:02.532 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:31:02.532 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:02.532 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:31:02.532 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:31:02.532 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:31:02.532 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:02.532 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:31:02.532 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:02.532 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:31:02.532 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:31:02.532 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:02.532 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:31:02.532 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:31:02.532 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:31:02.532 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:31:02.532 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:31:02.532 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:31:02.532 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:31:02.532 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:31:02.533 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:31:02.533 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:31:02.533 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:02.533 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:31:02.533 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:02.533 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:02.533 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:31:02.533 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:31:02.533 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:31:02.533 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:31:02.533 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:31:02.533 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:02.533 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:02.533 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:02.533 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:31:02.533 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:31:02.533 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:31:02.533 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:31:02.533 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:31:02.533 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names
00:31:02.533 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:31:02.533 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:31:02.533 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:02.533 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:31:02.533 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:02.533 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:31:02.533 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:02.533 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]]
00:31:02.533 01:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1
00:31:03.099 [2024-07-26 01:13:33.424923] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:31:03.099 [2024-07-26 01:13:33.424950] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:31:03.099 [2024-07-26 01:13:33.424982] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:31:03.099 [2024-07-26 01:13:33.513264] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:31:03.360 [2024-07-26 01:13:33.698206] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:31:03.360 [2024-07-26 01:13:33.698235] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:31:03.618 01:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:03.618 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]]
00:31:03.618 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:31:03.618 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:31:03.618 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:31:03.618 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:31:03.618 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:31:03.618 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:31:03.618 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:31:03.618 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
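The recurring `@914`-`@918` trace lines above (`local cond`, `local max=10`, `(( max-- ))`, `eval` of the condition string, `return 0`) are the `waitforcondition` helper from `autotest_common.sh` re-evaluating a shell condition until it holds. A hedged reconstruction inferred from those traced lines, not the actual SPDK source, looks like this:

```shell
# Hedged reconstruction of waitforcondition from its xtrace: @914 stores the
# condition string, @915 sets max=10, @916 decrements it, @917 evals the
# condition, @918 returns 0 on success; the retry delay mirrors the
# 'sleep 1' traced at @920.
waitforcondition() {
    local cond=$1
    local max=${2:-10}
    while (( max-- )); do
        if eval "$cond"; then
            return 0   # condition held
        fi
        sleep 1
    done
    return 1           # condition never became true
}
```

The test passes compound conditions such as `'[[ "$(get_subsystem_names)" == "nvme0" ]]'` as a single string, which is why the helper must run them through `eval` instead of executing them directly.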
00:31:03.618 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:31:03.618 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:31:03.618 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:31:03.618 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:03.618 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:03.618 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:03.875 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:31:03.875 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:31:03.875 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:31:03.875 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:31:03.875 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:31:03.875 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:03.875 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:03.875 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:03.875 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:31:03.875 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:31:03.875 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:31:03.875 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:31:03.875 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:31:03.875 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:31:03.875 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:31:03.875 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:03.876 [2024-07-26 01:13:34.164262] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:31:03.876 [2024-07-26 01:13:34.164598] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:31:03.876 [2024-07-26 01:13:34.164628] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:03.876 [2024-07-26 01:13:34.250888] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:31:03.876 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.135 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:04.135 01:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:31:04.135 [2024-07-26 01:13:34.554279] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:04.135 [2024-07-26 01:13:34.554303] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:04.135 [2024-07-26 01:13:34.554312] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 
00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.072 [2024-07-26 01:13:35.401002] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:05.072 [2024-07-26 01:13:35.401042] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 
max=10 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:05.072 [2024-07-26 01:13:35.409550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.072 [2024-07-26 01:13:35.409585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.072 [2024-07-26 01:13:35.409617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.072 [2024-07-26 01:13:35.409631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.072 [2024-07-26 01:13:35.409646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.072 [2024-07-26 01:13:35.409660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.072 [2024-07-26 01:13:35.409674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.072 [2024-07-26 01:13:35.409688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.072 [2024-07-26 01:13:35.409702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1869550 is same with the state(5) to be set 00:31:05.072 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.072 [2024-07-26 01:13:35.419543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1869550 (9): Bad file descriptor 00:31:05.072 [2024-07-26 01:13:35.429585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:05.072 [2024-07-26 01:13:35.429873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.072 [2024-07-26 01:13:35.429908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1869550 with addr=10.0.0.2, port=4420 00:31:05.072 [2024-07-26 01:13:35.429926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1869550 is same with the state(5) to be set 00:31:05.072 [2024-07-26 01:13:35.429949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1869550 (9): Bad file descriptor 00:31:05.072 [2024-07-26 01:13:35.429970] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:05.073 [2024-07-26 01:13:35.429984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:05.073 
[2024-07-26 01:13:35.429999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:05.073 [2024-07-26 01:13:35.430019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:05.073 [2024-07-26 01:13:35.439676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:05.073 [2024-07-26 01:13:35.439861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.073 [2024-07-26 01:13:35.439888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1869550 with addr=10.0.0.2, port=4420 00:31:05.073 [2024-07-26 01:13:35.439903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1869550 is same with the state(5) to be set 00:31:05.073 [2024-07-26 01:13:35.439926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1869550 (9): Bad file descriptor 00:31:05.073 [2024-07-26 01:13:35.439946] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:05.073 [2024-07-26 01:13:35.439959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:05.073 [2024-07-26 01:13:35.439972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:05.073 [2024-07-26 01:13:35.439990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
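The retry loop driving these `waitforcondition` checks is visible in the trace only through its `common/autotest_common.sh@914`–`@920` xtrace lines (`local cond=…`, `local max=10`, `(( max-- ))`, `eval …`, `sleep 1`, `return 0`). A minimal sketch of that helper, reconstructed from those trace lines (the real helper in `autotest_common.sh` may differ in details):

```shell
#!/usr/bin/env bash
# Sketch of the polling helper whose xtrace lines appear above as
# @914-@920: evaluate an arbitrary condition string up to `max` times,
# sleeping 1s between failed attempts.
waitforcondition() {
	local cond=$1           # condition string, re-eval'd each pass (@914)
	local max=10            # retry budget (@915)
	while (( max-- )); do   # (@916)
		if eval "$cond"; then  # (@917)
			return 0        # condition met (@918)
		fi
		sleep 1             # (@920)
	done
	return 1                # budget exhausted, condition never held
}
```

In the trace above the condition strings wrap RPC output checks, e.g. `waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'`, so each failed pass re-queries the target over `/tmp/host.sock` before sleeping.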
00:31:05.073 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:05.073 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:31:05.073 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:05.073 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:05.073 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:31:05.073 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:05.073 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:05.073 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:31:05.073 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:05.073 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:05.073 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.073 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.073 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:05.073 [2024-07-26 01:13:35.449760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:05.073 [2024-07-26 01:13:35.449945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.073 [2024-07-26 01:13:35.449973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1869550 with 
addr=10.0.0.2, port=4420 00:31:05.073 [2024-07-26 01:13:35.449995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1869550 is same with the state(5) to be set 00:31:05.073 [2024-07-26 01:13:35.450020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1869550 (9): Bad file descriptor 00:31:05.073 [2024-07-26 01:13:35.450041] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:05.073 [2024-07-26 01:13:35.450066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:05.073 [2024-07-26 01:13:35.450081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:05.073 [2024-07-26 01:13:35.450100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:05.073 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:05.073 [2024-07-26 01:13:35.459834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:05.073 [2024-07-26 01:13:35.460033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.073 [2024-07-26 01:13:35.460067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1869550 with addr=10.0.0.2, port=4420 00:31:05.073 [2024-07-26 01:13:35.460085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1869550 is same with the state(5) to be set 00:31:05.073 [2024-07-26 01:13:35.460107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1869550 (9): Bad file descriptor 00:31:05.073 [2024-07-26 01:13:35.460127] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:05.073 [2024-07-26 01:13:35.460141] 
nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:05.073 [2024-07-26 01:13:35.460154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:05.073 [2024-07-26 01:13:35.460173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:05.073 [2024-07-26 01:13:35.469921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:05.073 [2024-07-26 01:13:35.470145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.073 [2024-07-26 01:13:35.470173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1869550 with addr=10.0.0.2, port=4420 00:31:05.073 [2024-07-26 01:13:35.470189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1869550 is same with the state(5) to be set 00:31:05.073 [2024-07-26 01:13:35.470211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1869550 (9): Bad file descriptor 00:31:05.073 [2024-07-26 01:13:35.470231] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:05.073 [2024-07-26 01:13:35.470245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:05.073 [2024-07-26 01:13:35.470258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:05.073 [2024-07-26 01:13:35.470288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:05.073 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.073 [2024-07-26 01:13:35.480004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:05.073 [2024-07-26 01:13:35.480203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.073 [2024-07-26 01:13:35.480230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1869550 with addr=10.0.0.2, port=4420 00:31:05.073 [2024-07-26 01:13:35.480246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1869550 is same with the state(5) to be set 00:31:05.073 [2024-07-26 01:13:35.480267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1869550 (9): Bad file descriptor 00:31:05.073 [2024-07-26 01:13:35.480305] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:05.073 [2024-07-26 01:13:35.480323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:05.073 [2024-07-26 01:13:35.480337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:05.073 [2024-07-26 01:13:35.480355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
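The comparisons logged as `[[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]` look odd but are ordinary string matches: bash xtrace re-prints a quoted `[[ ]]` right-hand side with every character backslash-escaped so it stays a literal (non-glob) pattern. A small demonstration of the equivalence (variable names here are illustrative, not from the test script):

```shell
#!/usr/bin/env bash
# xtrace renders a quoted [[ ]] pattern with per-character escapes;
# the two comparisons below are the same test.
ports="4420 4421"
[[ $ports == "4420 4421" ]] && echo match          # as written in the script
[[ $ports == \4\4\2\0\ \4\4\2\1 ]] && echo match   # as xtrace prints it
```

Both lines print `match`; a mismatch (e.g. only `4420` discovered, as in the first pass above) simply fails the test and sends the helper back around the retry loop.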
00:31:05.073 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:05.073 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:31:05.073 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:05.073 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:05.073 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:31:05.073 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:05.073 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:05.073 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:31:05.073 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:05.073 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:05.073 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.073 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:05.073 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.073 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:05.073 [2024-07-26 01:13:35.488326] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 
00:31:05.073 [2024-07-26 01:13:35.488370] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:05.073 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.333 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:31:05.333 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:31:05.333 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:05.333 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:05.333 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:05.333 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:05.333 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:31:05.333 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:05.333 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:05.333 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:31:05.334 01:13:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:05.334 01:13:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:05.334 01:13:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.334 01:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.713 [2024-07-26 01:13:36.765767] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:06.713 [2024-07-26 01:13:36.765799] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:06.713 [2024-07-26 01:13:36.765825] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:06.713 [2024-07-26 01:13:36.893244] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:06.972 [2024-07-26 01:13:37.203211] 
bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:06.972 [2024-07-26 01:13:37.203277] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:06.972 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.972 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:06.972 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:31:06.972 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:06.972 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:06.972 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:06.972 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:06.972 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:06.972 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.973 request: 00:31:06.973 { 00:31:06.973 "name": "nvme", 00:31:06.973 "trtype": "tcp", 
00:31:06.973 "traddr": "10.0.0.2", 00:31:06.973 "adrfam": "ipv4", 00:31:06.973 "trsvcid": "8009", 00:31:06.973 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:06.973 "wait_for_attach": true, 00:31:06.973 "method": "bdev_nvme_start_discovery", 00:31:06.973 "req_id": 1 00:31:06.973 } 00:31:06.973 Got JSON-RPC error response 00:31:06.973 response: 00:31:06.973 { 00:31:06.973 "code": -17, 00:31:06.973 "message": "File exists" 00:31:06.973 } 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 
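The exchange above shows `bdev_nvme_start_discovery` rejecting a second start of the already-running discovery service `nvme` with JSON-RPC error `-17` ("File exists"), which the test's `NOT` wrapper treats as the expected outcome. A minimal Python sketch of that check, using the error payload copied from the log (the helper name `is_already_started` is hypothetical, not part of SPDK):

```python
import json

# JSON-RPC error body returned by bdev_nvme_start_discovery when a discovery
# service with the same -b name is already attached (values from the log above).
response = json.loads('{"code": -17, "message": "File exists"}')

def is_already_started(err):
    # Hypothetical helper: -17 corresponds to -EEXIST, i.e. the named
    # discovery service already exists on this host socket.
    return err["code"] == -17

print(is_already_started(response))  # True
```

The later 8010-port attempt in this same log fails differently, with `-110` ("Connection timed out"), so distinguishing the two codes is what lets the test assert the intended failure mode.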
00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:06.973 01:13:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.973 request: 00:31:06.973 { 00:31:06.973 "name": "nvme_second", 00:31:06.973 "trtype": "tcp", 00:31:06.973 "traddr": "10.0.0.2", 00:31:06.973 "adrfam": "ipv4", 00:31:06.973 "trsvcid": "8009", 00:31:06.973 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:06.973 "wait_for_attach": true, 00:31:06.973 "method": "bdev_nvme_start_discovery", 00:31:06.973 "req_id": 1 00:31:06.973 } 00:31:06.973 Got JSON-RPC error response 00:31:06.973 response: 00:31:06.973 { 00:31:06.973 "code": -17, 00:31:06.973 "message": "File exists" 00:31:06.973 } 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # 
jq -r '.[].name' 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:06.973 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:07.232 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.232 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:07.232 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:07.232 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # 
local es=0 00:31:07.232 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:07.232 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:07.232 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:07.232 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:07.232 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:07.232 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:07.232 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.232 01:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.171 [2024-07-26 01:13:38.426740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.171 [2024-07-26 01:13:38.426831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1884c50 with addr=10.0.0.2, port=8010 00:31:08.171 [2024-07-26 01:13:38.426862] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:08.171 [2024-07-26 01:13:38.426876] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:08.171 [2024-07-26 01:13:38.426904] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:09.110 [2024-07-26 01:13:39.429160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.110 [2024-07-26 
01:13:39.429227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1886700 with addr=10.0.0.2, port=8010 00:31:09.110 [2024-07-26 01:13:39.429257] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:09.110 [2024-07-26 01:13:39.429271] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:09.110 [2024-07-26 01:13:39.429283] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:10.047 [2024-07-26 01:13:40.431342] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:10.047 request: 00:31:10.047 { 00:31:10.047 "name": "nvme_second", 00:31:10.047 "trtype": "tcp", 00:31:10.047 "traddr": "10.0.0.2", 00:31:10.047 "adrfam": "ipv4", 00:31:10.047 "trsvcid": "8010", 00:31:10.047 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:10.047 "wait_for_attach": false, 00:31:10.047 "attach_timeout_ms": 3000, 00:31:10.047 "method": "bdev_nvme_start_discovery", 00:31:10.047 "req_id": 1 00:31:10.047 } 00:31:10.047 Got JSON-RPC error response 00:31:10.047 response: 00:31:10.047 { 00:31:10.047 "code": -110, 00:31:10.047 "message": "Connection timed out" 00:31:10.047 } 00:31:10.047 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:10.047 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:31:10.047 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:10.047 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:10.047 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:10.047 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:10.047 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:10.047 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:10.047 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.047 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.047 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:10.047 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:10.047 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.306 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:10.306 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:10.306 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1947719 00:31:10.306 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:10.306 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:10.306 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:31:10.306 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:10.306 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:31:10.306 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:10.306 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:10.306 rmmod nvme_tcp 00:31:10.306 rmmod nvme_fabrics 00:31:10.306 rmmod nvme_keyring 00:31:10.306 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:10.306 01:13:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:31:10.306 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:31:10.306 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1947573 ']' 00:31:10.306 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1947573 00:31:10.306 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 1947573 ']' 00:31:10.306 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 1947573 00:31:10.306 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:31:10.306 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:10.307 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1947573 00:31:10.307 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:10.307 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:10.307 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1947573' 00:31:10.307 killing process with pid 1947573 00:31:10.307 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 1947573 00:31:10.307 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 1947573 00:31:10.565 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:10.565 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:10.565 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:10.565 01:13:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:10.565 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:10.565 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.565 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:10.565 01:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:12.469 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:12.469 00:31:12.469 real 0m13.174s 00:31:12.469 user 0m19.449s 00:31:12.469 sys 0m2.679s 00:31:12.469 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:12.469 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.469 ************************************ 00:31:12.469 END TEST nvmf_host_discovery 00:31:12.469 ************************************ 00:31:12.469 01:13:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:12.469 01:13:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:12.469 01:13:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:12.469 01:13:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.727 ************************************ 00:31:12.727 START TEST nvmf_host_multipath_status 00:31:12.727 ************************************ 00:31:12.727 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh 
--transport=tcp 00:31:12.727 * Looking for test storage... 00:31:12.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:12.727 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:12.727 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:12.727 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:12.727 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:12.727 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:12.727 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:12.727 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:12.727 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:12.727 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:12.727 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:12.727 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:12.727 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:12.727 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:12.727 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:12.727 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:12.727 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:12.727 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:12.727 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:12.727 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:12.727 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:12.727 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:12.728 01:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:31:12.728 01:13:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:14.628 
01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:14.628 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:14.628 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:14.628 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:14.628 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:14.629 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:14.629 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:14.629 01:13:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:14.629 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:14.629 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:14.629 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:14.629 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:14.629 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:14.629 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:31:14.629 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:14.629 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:14.629 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:14.629 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:14.629 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:14.629 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:14.629 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:14.629 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:14.629 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:14.629 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:14.629 01:13:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:14.629 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:14.629 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:14.629 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:14.629 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:14.629 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:14.629 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:14.629 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:14.629 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:14.629 01:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:14.629 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:14.629 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:14.629 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:14.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:14.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:31:14.629 00:31:14.629 --- 10.0.0.2 ping statistics --- 00:31:14.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:14.629 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:31:14.629 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:14.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:14.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:31:14.629 00:31:14.629 --- 10.0.0.1 ping statistics --- 00:31:14.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:14.629 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:31:14.629 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:14.629 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:31:14.629 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:14.629 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:14.629 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:14.629 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:14.629 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:14.629 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:14.629 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:14.889 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:14.889 01:13:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:14.889 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:14.889 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:14.889 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1950748 00:31:14.889 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:14.889 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1950748 00:31:14.889 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1950748 ']' 00:31:14.889 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:14.889 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:14.889 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:14.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:14.889 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:14.889 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:14.889 [2024-07-26 01:13:45.113544] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:31:14.889 [2024-07-26 01:13:45.113642] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:14.889 EAL: No free 2048 kB hugepages reported on node 1 00:31:14.889 [2024-07-26 01:13:45.178035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:14.889 [2024-07-26 01:13:45.266926] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:14.889 [2024-07-26 01:13:45.266989] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:14.889 [2024-07-26 01:13:45.267017] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:14.889 [2024-07-26 01:13:45.267028] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:14.889 [2024-07-26 01:13:45.267037] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:14.889 [2024-07-26 01:13:45.267118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:14.889 [2024-07-26 01:13:45.267123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.147 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:15.147 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:31:15.147 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:15.147 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:15.147 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:15.147 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:15.147 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1950748 00:31:15.147 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:15.405 [2024-07-26 01:13:45.662455] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:15.405 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:15.663 Malloc0 00:31:15.663 01:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:15.921 01:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:16.179 01:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:16.437 [2024-07-26 01:13:46.764028] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:16.437 01:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:16.694 [2024-07-26 01:13:47.040716] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:16.695 01:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1951032 00:31:16.695 01:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:16.695 01:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:16.695 01:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1951032 /var/tmp/bdevperf.sock 00:31:16.695 01:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1951032 ']' 00:31:16.695 01:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:16.695 01:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:16.695 01:13:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:16.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:16.695 01:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:16.695 01:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:16.952 01:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:16.952 01:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:31:16.952 01:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:17.522 01:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:17.782 Nvme0n1 00:31:17.782 01:13:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:18.347 Nvme0n1 00:31:18.347 01:13:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:18.347 01:13:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 
00:31:20.318 01:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:20.318 01:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:20.575 01:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:20.832 01:13:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:21.766 01:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:21.766 01:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:21.766 01:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.766 01:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:22.330 01:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:22.330 01:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:22.330 01:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.330 01:13:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:22.330 01:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:22.330 01:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:22.330 01:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.330 01:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:22.587 01:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:22.587 01:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:22.587 01:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.587 01:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:22.843 01:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:22.843 01:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:22.843 01:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.843 
01:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:23.098 01:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:23.098 01:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:23.098 01:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.098 01:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:23.355 01:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:23.355 01:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:23.355 01:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:23.613 01:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:23.872 01:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:25.250 01:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:25.250 01:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:25.250 01:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.250 01:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:25.250 01:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:25.250 01:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:25.250 01:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.250 01:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:25.508 01:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:25.508 01:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:25.508 01:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.508 01:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:25.765 01:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:25.765 01:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:25.765 01:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.765 01:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:26.023 01:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:26.023 01:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:26.023 01:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.023 01:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:26.280 01:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:26.280 01:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:26.280 01:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.280 01:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:26.538 01:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:26.538 01:13:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:26.538 01:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:26.796 01:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:27.054 01:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:27.988 01:13:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:27.988 01:13:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:27.988 01:13:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.988 01:13:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:28.246 01:13:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.246 01:13:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:28.246 01:13:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.246 01:13:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:28.503 01:13:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:28.503 01:13:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:28.503 01:13:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.503 01:13:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:28.761 01:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.761 01:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:28.761 01:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.761 01:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:29.019 01:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.019 01:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:29.019 01:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:29.019 
01:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:29.276 01:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.276 01:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:29.276 01:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:29.276 01:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:29.533 01:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.533 01:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:31:29.533 01:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:29.790 01:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:30.048 01:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:30.982 01:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:30.982 01:14:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:30.982 01:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.982 01:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:31.239 01:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.239 01:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:31.239 01:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.239 01:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:31.497 01:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:31.497 01:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:31.497 01:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.497 01:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:31.754 01:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.754 01:14:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:31.754 01:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.754 01:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:32.011 01:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.011 01:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:32.011 01:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.011 01:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:32.269 01:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.269 01:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:32.269 01:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.269 01:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:32.527 01:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:32.527 
01:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:32.527 01:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:32.785 01:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:33.043 01:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:34.411 01:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:34.411 01:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:34.411 01:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.411 01:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:34.411 01:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:34.411 01:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:34.411 01:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.411 01:14:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:34.668 01:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:34.669 01:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:34.669 01:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.669 01:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:34.926 01:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:34.926 01:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:34.926 01:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.926 01:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:35.184 01:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:35.184 01:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:35.184 01:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.184 
01:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:35.442 01:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:35.442 01:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:35.442 01:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.442 01:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:35.700 01:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:35.700 01:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:35.700 01:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:35.958 01:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:36.216 01:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:31:37.151 01:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:37.151 01:14:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:37.151 01:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.151 01:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:37.409 01:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:37.409 01:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:37.409 01:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.409 01:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:37.691 01:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:37.691 01:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:37.691 01:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.691 01:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:37.964 01:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:37.964 01:14:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:37.964 01:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.964 01:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:38.222 01:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.222 01:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:38.222 01:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.222 01:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:38.480 01:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:38.480 01:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:38.480 01:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.480 01:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:38.739 01:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.739 
01:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:31:38.997 01:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:31:38.997 01:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:39.255 01:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:39.255 01:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:31:40.629 01:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:31:40.629 01:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:40.629 01:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.629 01:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:40.629 01:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:40.629 01:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:40.629 
01:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.629 01:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:40.887 01:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:40.887 01:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:40.887 01:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.887 01:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:41.145 01:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.145 01:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:41.145 01:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.145 01:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:41.403 01:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.403 01:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:41.403 
01:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.403 01:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:41.664 01:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.664 01:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:41.664 01:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.664 01:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:41.922 01:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.922 01:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:41.922 01:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:42.180 01:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:42.439 01:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 
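The trace above repeatedly runs the same check: `port_status` fetches `bdev_nvme_get_io_paths` over the bdevperf RPC socket and pulls one attribute (`current`, `connected`, or `accessible`) for the path matching a listener port. The sketch below applies the exact `jq` filter from the log to a hypothetical, hand-written sample of the RPC output — the JSON field shape (`poll_groups[].io_paths[]`, `transport.trsvcid`) is inferred from the filter itself, not copied from a live target, so treat the sample as an assumption.

```shell
#!/usr/bin/env bash
# Hypothetical sample of `rpc.py bdev_nvme_get_io_paths` output; the field
# layout is an assumption reconstructed from the jq filter used in the log.
cat > /tmp/io_paths_sample.json <<'EOF'
{
  "poll_groups": [
    {
      "io_paths": [
        { "transport": { "trsvcid": "4420" }, "current": true,  "connected": true, "accessible": true },
        { "transport": { "trsvcid": "4421" }, "current": false, "connected": true, "accessible": true }
      ]
    }
  ]
}
EOF

# The same jq filter the test's port_status() helper uses: select the io_path
# by listener port, then print a single boolean attribute.
status=$(jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current' /tmp/io_paths_sample.json)
if [ "$status" = "true" ]; then
    echo "port 4420 is the current path"
fi
```

Against a live target the `cat` is replaced by `rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths`, as in the log; the comparison there is written `[[ $status == \t\r\u\e ]]`, bash's pattern-match spelling of the same string test.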
00:31:43.371 01:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:31:43.371 01:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:43.371 01:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.371 01:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:43.627 01:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:43.627 01:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:43.627 01:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.627 01:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:43.883 01:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:43.883 01:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:43.883 01:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.883 01:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:31:44.140 01:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.140 01:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:44.140 01:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.140 01:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:44.397 01:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.397 01:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:44.397 01:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.397 01:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:44.655 01:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.655 01:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:44.655 01:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.655 01:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:31:44.912 01:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.912 01:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:44.912 01:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:45.170 01:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:45.428 01:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:31:46.361 01:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:46.361 01:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:46.361 01:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.362 01:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:46.619 01:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.619 01:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:46.619 01:14:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.619 01:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:46.876 01:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.876 01:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:46.876 01:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.876 01:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:47.134 01:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:47.134 01:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:47.134 01:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.134 01:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:47.392 01:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:47.392 01:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:47.392 01:14:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.392 01:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:47.650 01:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:47.650 01:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:47.650 01:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.650 01:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:47.907 01:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:47.907 01:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:47.907 01:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:48.166 01:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:48.424 01:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 
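Each `set_ANA_state` step in the trace issues two `nvmf_subsystem_listener_set_ana_state` RPCs — one per listener port — then the test sleeps and re-runs `check_status`. Below is a minimal dry-run sketch of that helper; the subsystem NQN, address, and ports are taken from the log, but `RPC="echo rpc.py"` is a deliberate stand-in so the sketch runs without a live SPDK target (point `RPC` at the real `scripts/rpc.py` to actually drive one).

```shell
#!/usr/bin/env bash
# Dry-run stand-in: echo the rpc.py invocations instead of executing them.
RPC="echo rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

set_ANA_state() {
    # $1 -> ANA state for the 4420 listener, $2 -> ANA state for the 4421 listener
    $RPC nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $RPC nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

# Same transition the surrounding trace performs at sh@133:
set_ANA_state non_optimized inaccessible
```

The trace walks this helper through every state pair (optimized, non_optimized, inaccessible) twice — once under the default multipath policy and once after `bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active` — verifying via `check_status` which path ends up current, connected, and accessible each time.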
00:31:49.799 01:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:49.799 01:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:49.799 01:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.799 01:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:49.799 01:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.799 01:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:49.799 01:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.799 01:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:50.057 01:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:50.057 01:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:50.057 01:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.057 01:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:31:50.315 01:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.315 01:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:50.315 01:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.315 01:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:50.573 01:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.573 01:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:50.573 01:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.573 01:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:50.831 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.831 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:50.831 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.831 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:31:51.090 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:51.090 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1951032 00:31:51.090 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1951032 ']' 00:31:51.090 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1951032 00:31:51.090 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:31:51.090 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:51.090 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1951032 00:31:51.090 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:31:51.090 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:31:51.090 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1951032' 00:31:51.090 killing process with pid 1951032 00:31:51.090 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1951032 00:31:51.090 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1951032 00:31:51.352 Connection closed with partial response: 00:31:51.352 00:31:51.352 00:31:51.352 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1951032 00:31:51.352 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 
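The `killprocess` expansion at the end of the trace above (autotest_common.sh@950-@974) guards against killing the wrong thing before reaping bdevperf. A minimal sketch of that pattern, reconstructed from the expansion — the exact guards in autotest_common.sh may differ — is:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern expanded in the trace: refuse empty
# pids, probe liveness with "kill -0", never kill a bare sudo wrapper,
# then kill and wait so the process is reaped and its exit observed.
killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1                 # the '[' -z ... ']' guard
    kill -0 "$pid" 2>/dev/null || return 0    # already gone
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" != sudo ] || return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap; tolerate non-child pids
}

sleep 30 &
killprocess $!
```

The trailing `wait 1951032` at @139 in the trace serves the same purpose as the `wait` here: it collects the bdevperf exit status (nonzero here, hence the "Connection closed with partial response" lines) before the script cats the captured log.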
00:31:51.352 [2024-07-26 01:13:47.098118] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:31:51.352 [2024-07-26 01:13:47.098202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1951032 ] 00:31:51.352 EAL: No free 2048 kB hugepages reported on node 1 00:31:51.352 [2024-07-26 01:13:47.156691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:51.352 [2024-07-26 01:13:47.241086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:51.352 Running I/O for 90 seconds... 00:31:51.352 [2024-07-26 01:14:03.144818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.352 [2024-07-26 01:14:03.144888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:51.352 [2024-07-26 01:14:03.144969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.352 [2024-07-26 01:14:03.145000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:51.352 [2024-07-26 01:14:03.145035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.352 [2024-07-26 01:14:03.145071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:51.352 [2024-07-26 01:14:03.145109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:51.352 [2024-07-26 01:14:03.145134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:51.352 [2024-07-26 01:14:03.145171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.352 [2024-07-26 01:14:03.145197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:51.352 [2024-07-26 01:14:03.145233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.352 [2024-07-26 01:14:03.145259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:51.352 [2024-07-26 01:14:03.145297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.352 [2024-07-26 01:14:03.145324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:51.352 [2024-07-26 01:14:03.145375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.352 [2024-07-26 01:14:03.145400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:51.352 [2024-07-26 01:14:03.145447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.352 [2024-07-26 01:14:03.145471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:51.352 
[2024-07-26 01:14:03.145504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.352 [2024-07-26 01:14:03.145528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:51.352 [2024-07-26 01:14:03.145560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.352 [2024-07-26 01:14:03.145597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:51.352 [2024-07-26 01:14:03.145631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.352 [2024-07-26 01:14:03.145655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:51.352 [2024-07-26 01:14:03.145691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.352 [2024-07-26 01:14:03.145716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:51.352 [2024-07-26 01:14:03.145751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.352 [2024-07-26 01:14:03.145776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:51.352 [2024-07-26 01:14:03.145812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.353 [2024-07-26 
01:14:03.145837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.145873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.353 [2024-07-26 01:14:03.145897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.145975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.353 [2024-07-26 01:14:03.146002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.146039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.353 [2024-07-26 01:14:03.146087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.146125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.353 [2024-07-26 01:14:03.146151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.146186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.353 [2024-07-26 01:14:03.146214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.146249] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.353 [2024-07-26 01:14:03.146276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.146310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.353 [2024-07-26 01:14:03.146336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.146384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.353 [2024-07-26 01:14:03.146409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.146450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.353 [2024-07-26 01:14:03.146476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.146622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.353 [2024-07-26 01:14:03.146651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.146693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.353 [2024-07-26 01:14:03.146720] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.146759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.353 [2024-07-26 01:14:03.146786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.146826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.353 [2024-07-26 01:14:03.146853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.146891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:64784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.353 [2024-07-26 01:14:03.146929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.146965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.353 [2024-07-26 01:14:03.146990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.147042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.353 [2024-07-26 01:14:03.147092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.147157] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.353 [2024-07-26 01:14:03.147186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.147224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:64816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.353 [2024-07-26 01:14:03.147251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.147292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.353 [2024-07-26 01:14:03.147318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.147359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:64832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.353 [2024-07-26 01:14:03.147386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.147434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:64840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.353 [2024-07-26 01:14:03.147477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.147512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.353 [2024-07-26 01:14:03.147553] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.147589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.353 [2024-07-26 01:14:03.147616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.147654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:64864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.353 [2024-07-26 01:14:03.147680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.147721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.353 [2024-07-26 01:14:03.147747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.147787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.353 [2024-07-26 01:14:03.147813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.147865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.353 [2024-07-26 01:14:03.147891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.147927] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.353 [2024-07-26 01:14:03.147953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.147990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:64904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.353 [2024-07-26 01:14:03.148016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.148074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.353 [2024-07-26 01:14:03.148102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.148142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.353 [2024-07-26 01:14:03.148168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.148209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:64928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.353 [2024-07-26 01:14:03.148236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.148280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.353 [2024-07-26 01:14:03.148309] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.148361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.353 [2024-07-26 01:14:03.148387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.148424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:64952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.353 [2024-07-26 01:14:03.148449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:51.353 [2024-07-26 01:14:03.148487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.354 [2024-07-26 01:14:03.148514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.148553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.354 [2024-07-26 01:14:03.148579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.148614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:64976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.354 [2024-07-26 01:14:03.148639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.148675] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.354 [2024-07-26 01:14:03.148700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.148738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.354 [2024-07-26 01:14:03.148763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.148803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:65000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.354 [2024-07-26 01:14:03.148828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.148866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.354 [2024-07-26 01:14:03.148892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.148929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.354 [2024-07-26 01:14:03.148955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.148990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.354 [2024-07-26 01:14:03.149015] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.149076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.354 [2024-07-26 01:14:03.149110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.149148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.354 [2024-07-26 01:14:03.149174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.149213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.354 [2024-07-26 01:14:03.149238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.149277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.354 [2024-07-26 01:14:03.149304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.149342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.354 [2024-07-26 01:14:03.149383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.149543] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.354 [2024-07-26 01:14:03.149571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.149615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.354 [2024-07-26 01:14:03.149641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.149682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.354 [2024-07-26 01:14:03.149709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.149748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.354 [2024-07-26 01:14:03.149773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.149814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.354 [2024-07-26 01:14:03.149839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.149879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.354 [2024-07-26 01:14:03.149906] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.149944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.354 [2024-07-26 01:14:03.149970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.150009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.354 [2024-07-26 01:14:03.150057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.150112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.354 [2024-07-26 01:14:03.150140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.150180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.354 [2024-07-26 01:14:03.150207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.150247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.354 [2024-07-26 01:14:03.150275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.150316] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.354 [2024-07-26 01:14:03.150358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.150400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.354 [2024-07-26 01:14:03.150426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.150465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.354 [2024-07-26 01:14:03.150492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.150529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.354 [2024-07-26 01:14:03.150556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.150599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.354 [2024-07-26 01:14:03.150624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.150675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.354 [2024-07-26 01:14:03.150701] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.150741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.354 [2024-07-26 01:14:03.150767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.150806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.354 [2024-07-26 01:14:03.150832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.150872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.354 [2024-07-26 01:14:03.150897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.150943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.354 [2024-07-26 01:14:03.150970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.151010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.354 [2024-07-26 01:14:03.151036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:51.354 [2024-07-26 01:14:03.151103] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.354 [2024-07-26 01:14:03.151131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.151172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.355 [2024-07-26 01:14:03.151198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.151240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.355 [2024-07-26 01:14:03.151267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.151307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.355 [2024-07-26 01:14:03.151333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.151386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.355 [2024-07-26 01:14:03.151414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.151454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.355 [2024-07-26 01:14:03.151481] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.151522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.355 [2024-07-26 01:14:03.151548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.151588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.355 [2024-07-26 01:14:03.151614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.151652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.355 [2024-07-26 01:14:03.151677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.151718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.355 [2024-07-26 01:14:03.151743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.151789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.355 [2024-07-26 01:14:03.151815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.151854] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.355 [2024-07-26 01:14:03.151880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.151919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.355 [2024-07-26 01:14:03.151945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.151986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.355 [2024-07-26 01:14:03.152012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.152074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.355 [2024-07-26 01:14:03.152103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.152143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.355 [2024-07-26 01:14:03.152171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.152210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.355 [2024-07-26 01:14:03.152237] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.152276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.355 [2024-07-26 01:14:03.152303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.152345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.355 [2024-07-26 01:14:03.152385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.152426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.355 [2024-07-26 01:14:03.152452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.152491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.355 [2024-07-26 01:14:03.152518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.152558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.355 [2024-07-26 01:14:03.152583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.152624] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.355 [2024-07-26 01:14:03.152655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.152695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.355 [2024-07-26 01:14:03.152721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.152759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.355 [2024-07-26 01:14:03.152785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.152827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.355 [2024-07-26 01:14:03.152852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.152894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.355 [2024-07-26 01:14:03.152920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.152959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.355 [2024-07-26 01:14:03.152985] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.153024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:65032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.355 [2024-07-26 01:14:03.153075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.153122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:65040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.355 [2024-07-26 01:14:03.153148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.153190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:65048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.355 [2024-07-26 01:14:03.153217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.153258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:65056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.355 [2024-07-26 01:14:03.153284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.153325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:65064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.355 [2024-07-26 01:14:03.153366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.153408] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.355 [2024-07-26 01:14:03.153434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.153475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:65080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.355 [2024-07-26 01:14:03.153506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.153546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:65088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.355 [2024-07-26 01:14:03.153572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.153613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.355 [2024-07-26 01:14:03.153638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.153680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.355 [2024-07-26 01:14:03.153706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:51.355 [2024-07-26 01:14:03.153745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:65112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.356 [2024-07-26 01:14:03.153771] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:03.153810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:65120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.356 [2024-07-26 01:14:03.153835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:03.153876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:65128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.356 [2024-07-26 01:14:03.153900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.815521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:57904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.356 [2024-07-26 01:14:18.815583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.815661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:57920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.356 [2024-07-26 01:14:18.815690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.815727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:57936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.356 [2024-07-26 01:14:18.815754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.815804] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:57952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.356 [2024-07-26 01:14:18.815831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.815881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:57968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.356 [2024-07-26 01:14:18.815905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.815941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:57984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.356 [2024-07-26 01:14:18.815967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.816028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:58000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.356 [2024-07-26 01:14:18.816085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.816124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:58016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.356 [2024-07-26 01:14:18.816151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.816186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:58032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.356 [2024-07-26 01:14:18.816220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.816256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:58048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.356 [2024-07-26 01:14:18.816282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.816316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:58064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.356 [2024-07-26 01:14:18.816352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.816401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:58080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.356 [2024-07-26 01:14:18.816429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.816463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.356 [2024-07-26 01:14:18.816490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.816524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:58112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.356 [2024-07-26 01:14:18.816550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.816601] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:58128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.356 [2024-07-26 01:14:18.816629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.816665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:58144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.356 [2024-07-26 01:14:18.816693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.820826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:58152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.356 [2024-07-26 01:14:18.820856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.820898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:58168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.356 [2024-07-26 01:14:18.820926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.820970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:58184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.356 [2024-07-26 01:14:18.820996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.821032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:58200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.356 [2024-07-26 01:14:18.821079] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.821128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:58216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.356 [2024-07-26 01:14:18.821156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.821191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:58232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.356 [2024-07-26 01:14:18.821219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.821254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:58248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.356 [2024-07-26 01:14:18.821282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.821318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.356 [2024-07-26 01:14:18.821345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.821395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:58280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.356 [2024-07-26 01:14:18.821421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.821456] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:58296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.356 [2024-07-26 01:14:18.821482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.821517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:58312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.356 [2024-07-26 01:14:18.821542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.821577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:58328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.356 [2024-07-26 01:14:18.821602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.821639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:58344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.356 [2024-07-26 01:14:18.821665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.821701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:58360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.356 [2024-07-26 01:14:18.821726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.821761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:57888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.356 [2024-07-26 01:14:18.821793] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.821829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:58376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.356 [2024-07-26 01:14:18.821856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:51.356 [2024-07-26 01:14:18.821891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:57848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:51.356 [2024-07-26 01:14:18.821917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:51.356 Received shutdown signal, test time was about 32.556126 seconds 00:31:51.356 00:31:51.356 Latency(us) 00:31:51.356 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:51.356 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:31:51.356 Verification LBA range: start 0x0 length 0x4000 00:31:51.356 Nvme0n1 : 32.56 7947.25 31.04 0.00 0.00 16078.39 373.19 4026531.84 00:31:51.356 =================================================================================================================== 00:31:51.357 Total : 7947.25 31.04 0.00 0.00 16078.39 373.19 4026531.84 00:31:51.357 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:51.615 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:31:51.615 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:51.615 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:31:51.615 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:51.615 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:31:51.615 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:51.615 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:31:51.615 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:51.615 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:51.615 rmmod nvme_tcp 00:31:51.615 rmmod nvme_fabrics 00:31:51.615 rmmod nvme_keyring 00:31:51.615 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:51.615 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:31:51.615 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:31:51.615 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1950748 ']' 00:31:51.615 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1950748 00:31:51.615 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1950748 ']' 00:31:51.615 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1950748 00:31:51.615 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:31:51.615 01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:51.615 
01:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1950748 00:31:51.615 01:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:51.615 01:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:51.615 01:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1950748' 00:31:51.615 killing process with pid 1950748 00:31:51.615 01:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1950748 00:31:51.615 01:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1950748 00:31:51.875 01:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:51.875 01:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:51.875 01:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:51.875 01:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:51.875 01:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:51.875 01:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:51.875 01:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:51.875 01:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:54.409 00:31:54.409 real 0m41.407s 00:31:54.409 user 2m4.991s 
00:31:54.409 sys 0m10.712s 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:54.409 ************************************ 00:31:54.409 END TEST nvmf_host_multipath_status 00:31:54.409 ************************************ 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.409 ************************************ 00:31:54.409 START TEST nvmf_discovery_remove_ifc 00:31:54.409 ************************************ 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:54.409 * Looking for test storage... 
00:31:54.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # 
discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:31:54.409 01:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:31:56.344 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:56.344 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:31:56.344 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:56.344 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:56.344 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:56.344 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:56.344 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:56.344 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:31:56.344 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:56.344 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:31:56.344 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:31:56.344 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:31:56.344 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:31:56.344 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:31:56.344 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:31:56.344 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:56.344 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:56.344 01:14:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:56.344 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:56.344 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:56.344 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:56.344 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:56.344 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:56.344 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:56.344 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for 
pci in "${pci_devs[@]}" 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:56.345 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:56.345 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:56.345 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:56.345 01:14:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:56.345 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:56.345 01:14:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:56.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:56.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:31:56.345 00:31:56.345 --- 10.0.0.2 ping statistics --- 00:31:56.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.345 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:56.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:56.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:31:56.345 00:31:56.345 --- 10.0.0.1 ping statistics --- 00:31:56.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.345 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1957217 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1957217 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1957217 ']' 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:56.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:56.345 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:56.345 [2024-07-26 01:14:26.561350] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:31:56.346 [2024-07-26 01:14:26.561461] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:56.346 EAL: No free 2048 kB hugepages reported on node 1 00:31:56.346 [2024-07-26 01:14:26.633848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.346 [2024-07-26 01:14:26.728614] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:56.346 [2024-07-26 01:14:26.728676] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:56.346 [2024-07-26 01:14:26.728693] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:56.346 [2024-07-26 01:14:26.728706] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:56.346 [2024-07-26 01:14:26.728718] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:56.346 [2024-07-26 01:14:26.728746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:56.604 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:56.604 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:31:56.604 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:56.604 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:56.604 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:56.604 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:56.604 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:56.604 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.604 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:56.604 [2024-07-26 01:14:26.884201] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:56.604 [2024-07-26 01:14:26.892429] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:56.604 null0 00:31:56.604 [2024-07-26 01:14:26.924295] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:56.604 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.604 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1957245 00:31:56.604 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:56.604 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1957245 /tmp/host.sock 00:31:56.604 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1957245 ']' 00:31:56.604 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:31:56.604 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:56.604 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:56.604 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:56.604 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:56.604 01:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:56.604 [2024-07-26 01:14:26.988480] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:31:56.604 [2024-07-26 01:14:26.988560] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1957245 ] 00:31:56.604 EAL: No free 2048 kB hugepages reported on node 1 00:31:56.862 [2024-07-26 01:14:27.050191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.862 [2024-07-26 01:14:27.140774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:56.862 01:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:56.862 01:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:31:56.862 01:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:56.862 01:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:56.862 01:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.862 01:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:56.862 01:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.862 01:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:56.862 01:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.862 01:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:57.120 01:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.120 01:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:57.120 01:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.120 01:14:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:58.051 [2024-07-26 01:14:28.392221] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:58.051 [2024-07-26 01:14:28.392246] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:58.051 [2024-07-26 01:14:28.392268] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:58.051 [2024-07-26 01:14:28.478577] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:58.309 [2024-07-26 01:14:28.704844] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:58.309 [2024-07-26 01:14:28.704912] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:58.309 [2024-07-26 01:14:28.704957] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:58.309 [2024-07-26 01:14:28.704986] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:58.309 [2024-07-26 01:14:28.705014] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:58.309 01:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:31:58.309 01:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:58.309 01:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:58.309 01:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:58.309 01:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.309 01:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:58.309 01:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:58.309 01:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:58.309 01:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:58.309 01:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.566 01:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:58.566 [2024-07-26 01:14:28.751748] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xdae340 was disconnected and freed. delete nvme_qpair. 
00:31:58.566 01:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:58.566 01:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:58.566 01:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:58.566 01:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:58.566 01:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:58.566 01:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:58.566 01:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.566 01:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:58.566 01:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:58.566 01:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:58.566 01:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.566 01:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:58.566 01:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:59.498 01:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:59.498 01:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:59.498 01:14:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.498 01:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:59.498 01:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:59.498 01:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:59.499 01:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:59.499 01:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.499 01:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:59.499 01:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:00.872 01:14:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:00.872 01:14:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:00.872 01:14:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:00.872 01:14:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.872 01:14:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:00.872 01:14:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:00.872 01:14:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:00.872 01:14:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.872 01:14:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ 
nvme0n1 != '' ]] 00:32:00.872 01:14:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:01.805 01:14:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:01.805 01:14:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:01.805 01:14:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:01.805 01:14:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.805 01:14:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:01.805 01:14:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:01.805 01:14:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:01.805 01:14:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.805 01:14:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:01.805 01:14:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:02.736 01:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:02.736 01:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:02.736 01:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:02.736 01:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.736 01:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.736 01:14:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:02.736 01:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:02.736 01:14:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.736 01:14:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:02.736 01:14:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:03.667 01:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:03.667 01:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:03.667 01:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.667 01:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:03.667 01:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:03.667 01:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:03.667 01:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:03.667 01:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.667 01:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:03.667 01:14:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:03.923 [2024-07-26 01:14:34.146051] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 
00:32:03.923 [2024-07-26 01:14:34.146136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.923 [2024-07-26 01:14:34.146157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.923 [2024-07-26 01:14:34.146173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.923 [2024-07-26 01:14:34.146186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.923 [2024-07-26 01:14:34.146209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.923 [2024-07-26 01:14:34.146223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.923 [2024-07-26 01:14:34.146237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.923 [2024-07-26 01:14:34.146250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.923 [2024-07-26 01:14:34.146263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.923 [2024-07-26 01:14:34.146276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.923 [2024-07-26 01:14:34.146289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd74b60 is same with the state(5) to be set 00:32:03.923 [2024-07-26 01:14:34.156071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xd74b60 (9): Bad file descriptor 00:32:03.923 [2024-07-26 01:14:34.166129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:04.852 01:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:04.852 01:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:04.852 01:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.852 01:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:04.852 01:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:04.852 01:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:04.852 01:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:04.852 [2024-07-26 01:14:35.213086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:04.852 [2024-07-26 01:14:35.213132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd74b60 with addr=10.0.0.2, port=4420 00:32:04.852 [2024-07-26 01:14:35.213152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd74b60 is same with the state(5) to be set 00:32:04.852 [2024-07-26 01:14:35.213183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd74b60 (9): Bad file descriptor 00:32:04.852 [2024-07-26 01:14:35.213584] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:32:04.852 [2024-07-26 01:14:35.213626] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:04.852 [2024-07-26 01:14:35.213646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:04.852 [2024-07-26 01:14:35.213663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:04.852 [2024-07-26 01:14:35.213687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:04.852 [2024-07-26 01:14:35.213706] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:04.852 01:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.852 01:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:04.852 01:14:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:06.224 [2024-07-26 01:14:36.216203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:06.224 [2024-07-26 01:14:36.216234] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:06.224 [2024-07-26 01:14:36.216253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:06.224 [2024-07-26 01:14:36.216266] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:32:06.224 [2024-07-26 01:14:36.216287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.224 [2024-07-26 01:14:36.216324] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:06.224 [2024-07-26 01:14:36.216375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:06.224 [2024-07-26 01:14:36.216395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.224 [2024-07-26 01:14:36.216412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:06.224 [2024-07-26 01:14:36.216425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.224 [2024-07-26 01:14:36.216438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:06.224 [2024-07-26 01:14:36.216450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.224 [2024-07-26 01:14:36.216463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:06.224 [2024-07-26 01:14:36.216476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.224 [2024-07-26 01:14:36.216489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:06.224 [2024-07-26 01:14:36.216501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.224 [2024-07-26 01:14:36.216513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:32:06.224 [2024-07-26 01:14:36.216850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd73f80 (9): Bad file descriptor 00:32:06.224 [2024-07-26 01:14:36.217871] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:06.224 [2024-07-26 01:14:36.217896] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:06.224 01:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:06.224 01:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:06.224 01:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.224 01:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:06.224 01:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:06.224 01:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:06.224 01:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:06.224 01:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.224 01:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:06.224 01:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:06.224 01:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:06.224 01:14:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:06.224 01:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:06.224 01:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:06.224 01:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.224 01:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:06.224 01:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:06.224 01:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:06.224 01:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:06.224 01:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.224 01:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:06.224 01:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:07.156 01:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:07.156 01:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:07.156 01:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:07.156 01:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.156 01:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:07.156 01:14:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:07.156 01:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:07.156 01:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.156 01:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:07.156 01:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:08.090 [2024-07-26 01:14:38.269830] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:08.090 [2024-07-26 01:14:38.269867] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:08.090 [2024-07-26 01:14:38.269889] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:08.090 01:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:08.090 01:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:08.090 01:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:08.090 01:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.090 01:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:08.090 01:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:08.090 01:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:08.090 [2024-07-26 01:14:38.398278] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 
new subsystem nvme1 00:32:08.090 01:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.090 01:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:08.090 01:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:08.090 [2024-07-26 01:14:38.459116] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:08.090 [2024-07-26 01:14:38.459163] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:08.090 [2024-07-26 01:14:38.459216] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:08.090 [2024-07-26 01:14:38.459241] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:08.090 [2024-07-26 01:14:38.459255] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:08.090 [2024-07-26 01:14:38.466526] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xd877f0 was disconnected and freed. delete nvme_qpair. 
00:32:09.024 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:09.024 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:09.024 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:09.024 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.024 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:09.024 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:09.024 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:09.282 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.282 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:09.282 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:09.282 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1957245 00:32:09.282 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1957245 ']' 00:32:09.282 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1957245 00:32:09.282 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:32:09.282 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:09.282 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1957245 
00:32:09.282 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:09.282 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:09.282 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1957245' 00:32:09.282 killing process with pid 1957245 00:32:09.283 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1957245 00:32:09.283 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1957245 00:32:09.540 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:09.541 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:09.541 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:32:09.541 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:09.541 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:32:09.541 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:09.541 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:09.541 rmmod nvme_tcp 00:32:09.541 rmmod nvme_fabrics 00:32:09.541 rmmod nvme_keyring 00:32:09.541 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:09.541 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:32:09.541 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:32:09.541 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1957217 ']' 00:32:09.541 
01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1957217 00:32:09.541 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1957217 ']' 00:32:09.541 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1957217 00:32:09.541 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:32:09.541 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:09.541 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1957217 00:32:09.541 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:09.541 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:09.541 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1957217' 00:32:09.541 killing process with pid 1957217 00:32:09.541 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1957217 00:32:09.541 01:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1957217 00:32:09.799 01:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:09.799 01:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:09.799 01:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:09.799 01:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:09.799 01:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:32:09.799 01:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:09.799 01:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:09.799 01:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:11.697 01:14:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:11.697 00:32:11.697 real 0m17.702s 00:32:11.697 user 0m25.777s 00:32:11.697 sys 0m3.016s 00:32:11.697 01:14:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:11.697 01:14:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:11.697 ************************************ 00:32:11.697 END TEST nvmf_discovery_remove_ifc 00:32:11.697 ************************************ 00:32:11.697 01:14:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:11.697 01:14:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:11.697 01:14:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:11.697 01:14:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.697 ************************************ 00:32:11.697 START TEST nvmf_identify_kernel_target 00:32:11.697 ************************************ 00:32:11.697 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:11.955 * Looking for test storage... 
00:32:11.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:11.955 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:11.955 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:11.955 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:11.955 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:11.955 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:11.955 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:32:11.956 01:14:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:13.857 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:13.857 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:13.858 01:14:44 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:13.858 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:13.858 01:14:44 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:13.858 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:13.858 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:32:13.858 
01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:13.858 
01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:13.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:13.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:32:13.858 00:32:13.858 --- 10.0.0.2 ping statistics --- 00:32:13.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:13.858 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:13.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:13.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:32:13.858 00:32:13.858 --- 10.0.0.1 ping statistics --- 00:32:13.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:13.858 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:13.858 01:14:44 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@639 -- # local block nvme 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:13.858 01:14:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:15.231 Waiting for block devices as requested 00:32:15.231 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:15.231 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:15.231 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:15.489 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:15.489 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:15.489 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:15.489 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:15.747 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:15.747 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:15.747 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:15.747 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:16.005 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:16.005 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:16.005 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:16.005 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:16.005 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:16.292 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:16.292 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:16.292 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:16.292 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:32:16.292 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:16.292 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:16.292 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:16.292 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:16.292 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:16.292 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:16.292 No valid GPT data, bailing 00:32:16.292 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:16.292 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:16.292 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:16.292 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:16.292 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:16.292 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:16.292 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:16.292 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:16.292 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:16.292 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:32:16.292 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:16.292 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:32:16.292 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:16.292 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:32:16.292 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:32:16.292 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:32:16.292 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:16.292 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:16.552 00:32:16.552 Discovery Log Number of Records 2, Generation counter 2 00:32:16.552 =====Discovery Log Entry 0====== 00:32:16.552 trtype: tcp 00:32:16.552 adrfam: ipv4 00:32:16.552 subtype: current discovery subsystem 00:32:16.552 treq: not specified, sq flow control disable supported 00:32:16.552 portid: 1 00:32:16.552 trsvcid: 4420 00:32:16.552 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:16.552 traddr: 10.0.0.1 00:32:16.552 eflags: none 00:32:16.552 sectype: none 00:32:16.552 =====Discovery Log Entry 1====== 00:32:16.552 trtype: tcp 00:32:16.552 adrfam: ipv4 00:32:16.552 subtype: nvme subsystem 00:32:16.552 treq: not specified, sq flow control disable supported 00:32:16.552 portid: 1 
00:32:16.552 trsvcid: 4420 00:32:16.552 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:16.552 traddr: 10.0.0.1 00:32:16.552 eflags: none 00:32:16.552 sectype: none 00:32:16.552 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:16.552 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:16.552 EAL: No free 2048 kB hugepages reported on node 1 00:32:16.552 ===================================================== 00:32:16.552 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:16.552 ===================================================== 00:32:16.552 Controller Capabilities/Features 00:32:16.552 ================================ 00:32:16.552 Vendor ID: 0000 00:32:16.552 Subsystem Vendor ID: 0000 00:32:16.552 Serial Number: f79c6c5277e180284c40 00:32:16.552 Model Number: Linux 00:32:16.552 Firmware Version: 6.7.0-68 00:32:16.552 Recommended Arb Burst: 0 00:32:16.552 IEEE OUI Identifier: 00 00 00 00:32:16.552 Multi-path I/O 00:32:16.552 May have multiple subsystem ports: No 00:32:16.552 May have multiple controllers: No 00:32:16.552 Associated with SR-IOV VF: No 00:32:16.552 Max Data Transfer Size: Unlimited 00:32:16.552 Max Number of Namespaces: 0 00:32:16.552 Max Number of I/O Queues: 1024 00:32:16.552 NVMe Specification Version (VS): 1.3 00:32:16.552 NVMe Specification Version (Identify): 1.3 00:32:16.552 Maximum Queue Entries: 1024 00:32:16.552 Contiguous Queues Required: No 00:32:16.552 Arbitration Mechanisms Supported 00:32:16.552 Weighted Round Robin: Not Supported 00:32:16.552 Vendor Specific: Not Supported 00:32:16.552 Reset Timeout: 7500 ms 00:32:16.552 Doorbell Stride: 4 bytes 00:32:16.552 NVM Subsystem Reset: Not Supported 00:32:16.552 Command Sets Supported 00:32:16.552 NVM Command Set: Supported 00:32:16.552 Boot Partition: Not Supported 
00:32:16.552 Memory Page Size Minimum: 4096 bytes 00:32:16.552 Memory Page Size Maximum: 4096 bytes 00:32:16.552 Persistent Memory Region: Not Supported 00:32:16.552 Optional Asynchronous Events Supported 00:32:16.552 Namespace Attribute Notices: Not Supported 00:32:16.552 Firmware Activation Notices: Not Supported 00:32:16.552 ANA Change Notices: Not Supported 00:32:16.552 PLE Aggregate Log Change Notices: Not Supported 00:32:16.552 LBA Status Info Alert Notices: Not Supported 00:32:16.552 EGE Aggregate Log Change Notices: Not Supported 00:32:16.552 Normal NVM Subsystem Shutdown event: Not Supported 00:32:16.552 Zone Descriptor Change Notices: Not Supported 00:32:16.552 Discovery Log Change Notices: Supported 00:32:16.552 Controller Attributes 00:32:16.552 128-bit Host Identifier: Not Supported 00:32:16.552 Non-Operational Permissive Mode: Not Supported 00:32:16.552 NVM Sets: Not Supported 00:32:16.552 Read Recovery Levels: Not Supported 00:32:16.552 Endurance Groups: Not Supported 00:32:16.552 Predictable Latency Mode: Not Supported 00:32:16.552 Traffic Based Keep ALive: Not Supported 00:32:16.552 Namespace Granularity: Not Supported 00:32:16.552 SQ Associations: Not Supported 00:32:16.552 UUID List: Not Supported 00:32:16.552 Multi-Domain Subsystem: Not Supported 00:32:16.552 Fixed Capacity Management: Not Supported 00:32:16.552 Variable Capacity Management: Not Supported 00:32:16.552 Delete Endurance Group: Not Supported 00:32:16.552 Delete NVM Set: Not Supported 00:32:16.553 Extended LBA Formats Supported: Not Supported 00:32:16.553 Flexible Data Placement Supported: Not Supported 00:32:16.553 00:32:16.553 Controller Memory Buffer Support 00:32:16.553 ================================ 00:32:16.553 Supported: No 00:32:16.553 00:32:16.553 Persistent Memory Region Support 00:32:16.553 ================================ 00:32:16.553 Supported: No 00:32:16.553 00:32:16.553 Admin Command Set Attributes 00:32:16.553 ============================ 00:32:16.553 Security 
Send/Receive: Not Supported 00:32:16.553 Format NVM: Not Supported 00:32:16.553 Firmware Activate/Download: Not Supported 00:32:16.553 Namespace Management: Not Supported 00:32:16.553 Device Self-Test: Not Supported 00:32:16.553 Directives: Not Supported 00:32:16.553 NVMe-MI: Not Supported 00:32:16.553 Virtualization Management: Not Supported 00:32:16.553 Doorbell Buffer Config: Not Supported 00:32:16.553 Get LBA Status Capability: Not Supported 00:32:16.553 Command & Feature Lockdown Capability: Not Supported 00:32:16.553 Abort Command Limit: 1 00:32:16.553 Async Event Request Limit: 1 00:32:16.553 Number of Firmware Slots: N/A 00:32:16.553 Firmware Slot 1 Read-Only: N/A 00:32:16.553 Firmware Activation Without Reset: N/A 00:32:16.553 Multiple Update Detection Support: N/A 00:32:16.553 Firmware Update Granularity: No Information Provided 00:32:16.553 Per-Namespace SMART Log: No 00:32:16.553 Asymmetric Namespace Access Log Page: Not Supported 00:32:16.553 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:16.553 Command Effects Log Page: Not Supported 00:32:16.553 Get Log Page Extended Data: Supported 00:32:16.553 Telemetry Log Pages: Not Supported 00:32:16.553 Persistent Event Log Pages: Not Supported 00:32:16.553 Supported Log Pages Log Page: May Support 00:32:16.553 Commands Supported & Effects Log Page: Not Supported 00:32:16.553 Feature Identifiers & Effects Log Page:May Support 00:32:16.553 NVMe-MI Commands & Effects Log Page: May Support 00:32:16.553 Data Area 4 for Telemetry Log: Not Supported 00:32:16.553 Error Log Page Entries Supported: 1 00:32:16.553 Keep Alive: Not Supported 00:32:16.553 00:32:16.553 NVM Command Set Attributes 00:32:16.553 ========================== 00:32:16.553 Submission Queue Entry Size 00:32:16.553 Max: 1 00:32:16.553 Min: 1 00:32:16.553 Completion Queue Entry Size 00:32:16.553 Max: 1 00:32:16.553 Min: 1 00:32:16.553 Number of Namespaces: 0 00:32:16.553 Compare Command: Not Supported 00:32:16.553 Write Uncorrectable Command: 
Not Supported 00:32:16.553 Dataset Management Command: Not Supported 00:32:16.553 Write Zeroes Command: Not Supported 00:32:16.553 Set Features Save Field: Not Supported 00:32:16.553 Reservations: Not Supported 00:32:16.553 Timestamp: Not Supported 00:32:16.553 Copy: Not Supported 00:32:16.553 Volatile Write Cache: Not Present 00:32:16.553 Atomic Write Unit (Normal): 1 00:32:16.553 Atomic Write Unit (PFail): 1 00:32:16.553 Atomic Compare & Write Unit: 1 00:32:16.553 Fused Compare & Write: Not Supported 00:32:16.553 Scatter-Gather List 00:32:16.553 SGL Command Set: Supported 00:32:16.553 SGL Keyed: Not Supported 00:32:16.553 SGL Bit Bucket Descriptor: Not Supported 00:32:16.553 SGL Metadata Pointer: Not Supported 00:32:16.553 Oversized SGL: Not Supported 00:32:16.553 SGL Metadata Address: Not Supported 00:32:16.553 SGL Offset: Supported 00:32:16.553 Transport SGL Data Block: Not Supported 00:32:16.553 Replay Protected Memory Block: Not Supported 00:32:16.553 00:32:16.553 Firmware Slot Information 00:32:16.553 ========================= 00:32:16.553 Active slot: 0 00:32:16.553 00:32:16.553 00:32:16.553 Error Log 00:32:16.553 ========= 00:32:16.553 00:32:16.553 Active Namespaces 00:32:16.553 ================= 00:32:16.553 Discovery Log Page 00:32:16.553 ================== 00:32:16.553 Generation Counter: 2 00:32:16.553 Number of Records: 2 00:32:16.553 Record Format: 0 00:32:16.553 00:32:16.553 Discovery Log Entry 0 00:32:16.553 ---------------------- 00:32:16.553 Transport Type: 3 (TCP) 00:32:16.553 Address Family: 1 (IPv4) 00:32:16.553 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:16.553 Entry Flags: 00:32:16.553 Duplicate Returned Information: 0 00:32:16.553 Explicit Persistent Connection Support for Discovery: 0 00:32:16.553 Transport Requirements: 00:32:16.553 Secure Channel: Not Specified 00:32:16.553 Port ID: 1 (0x0001) 00:32:16.553 Controller ID: 65535 (0xffff) 00:32:16.553 Admin Max SQ Size: 32 00:32:16.553 Transport Service Identifier: 4420 
00:32:16.553 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:16.553 Transport Address: 10.0.0.1 00:32:16.553 Discovery Log Entry 1 00:32:16.553 ---------------------- 00:32:16.553 Transport Type: 3 (TCP) 00:32:16.553 Address Family: 1 (IPv4) 00:32:16.553 Subsystem Type: 2 (NVM Subsystem) 00:32:16.553 Entry Flags: 00:32:16.553 Duplicate Returned Information: 0 00:32:16.553 Explicit Persistent Connection Support for Discovery: 0 00:32:16.553 Transport Requirements: 00:32:16.553 Secure Channel: Not Specified 00:32:16.553 Port ID: 1 (0x0001) 00:32:16.553 Controller ID: 65535 (0xffff) 00:32:16.553 Admin Max SQ Size: 32 00:32:16.553 Transport Service Identifier: 4420 00:32:16.553 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:16.553 Transport Address: 10.0.0.1 00:32:16.553 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:16.553 EAL: No free 2048 kB hugepages reported on node 1 00:32:16.553 get_feature(0x01) failed 00:32:16.553 get_feature(0x02) failed 00:32:16.553 get_feature(0x04) failed 00:32:16.553 ===================================================== 00:32:16.553 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:16.553 ===================================================== 00:32:16.553 Controller Capabilities/Features 00:32:16.553 ================================ 00:32:16.553 Vendor ID: 0000 00:32:16.553 Subsystem Vendor ID: 0000 00:32:16.553 Serial Number: b7068e8a9a78e5501110 00:32:16.553 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:16.553 Firmware Version: 6.7.0-68 00:32:16.553 Recommended Arb Burst: 6 00:32:16.553 IEEE OUI Identifier: 00 00 00 00:32:16.553 Multi-path I/O 00:32:16.553 May have multiple subsystem ports: Yes 00:32:16.553 May have multiple 
controllers: Yes 00:32:16.553 Associated with SR-IOV VF: No 00:32:16.553 Max Data Transfer Size: Unlimited 00:32:16.553 Max Number of Namespaces: 1024 00:32:16.553 Max Number of I/O Queues: 128 00:32:16.553 NVMe Specification Version (VS): 1.3 00:32:16.553 NVMe Specification Version (Identify): 1.3 00:32:16.553 Maximum Queue Entries: 1024 00:32:16.553 Contiguous Queues Required: No 00:32:16.553 Arbitration Mechanisms Supported 00:32:16.553 Weighted Round Robin: Not Supported 00:32:16.553 Vendor Specific: Not Supported 00:32:16.553 Reset Timeout: 7500 ms 00:32:16.553 Doorbell Stride: 4 bytes 00:32:16.553 NVM Subsystem Reset: Not Supported 00:32:16.553 Command Sets Supported 00:32:16.553 NVM Command Set: Supported 00:32:16.553 Boot Partition: Not Supported 00:32:16.553 Memory Page Size Minimum: 4096 bytes 00:32:16.553 Memory Page Size Maximum: 4096 bytes 00:32:16.553 Persistent Memory Region: Not Supported 00:32:16.553 Optional Asynchronous Events Supported 00:32:16.553 Namespace Attribute Notices: Supported 00:32:16.553 Firmware Activation Notices: Not Supported 00:32:16.553 ANA Change Notices: Supported 00:32:16.553 PLE Aggregate Log Change Notices: Not Supported 00:32:16.553 LBA Status Info Alert Notices: Not Supported 00:32:16.553 EGE Aggregate Log Change Notices: Not Supported 00:32:16.553 Normal NVM Subsystem Shutdown event: Not Supported 00:32:16.553 Zone Descriptor Change Notices: Not Supported 00:32:16.553 Discovery Log Change Notices: Not Supported 00:32:16.553 Controller Attributes 00:32:16.553 128-bit Host Identifier: Supported 00:32:16.553 Non-Operational Permissive Mode: Not Supported 00:32:16.553 NVM Sets: Not Supported 00:32:16.553 Read Recovery Levels: Not Supported 00:32:16.554 Endurance Groups: Not Supported 00:32:16.554 Predictable Latency Mode: Not Supported 00:32:16.554 Traffic Based Keep ALive: Supported 00:32:16.554 Namespace Granularity: Not Supported 00:32:16.554 SQ Associations: Not Supported 00:32:16.554 UUID List: Not Supported 
00:32:16.554 Multi-Domain Subsystem: Not Supported 00:32:16.554 Fixed Capacity Management: Not Supported 00:32:16.554 Variable Capacity Management: Not Supported 00:32:16.554 Delete Endurance Group: Not Supported 00:32:16.554 Delete NVM Set: Not Supported 00:32:16.554 Extended LBA Formats Supported: Not Supported 00:32:16.554 Flexible Data Placement Supported: Not Supported 00:32:16.554 00:32:16.554 Controller Memory Buffer Support 00:32:16.554 ================================ 00:32:16.554 Supported: No 00:32:16.554 00:32:16.554 Persistent Memory Region Support 00:32:16.554 ================================ 00:32:16.554 Supported: No 00:32:16.554 00:32:16.554 Admin Command Set Attributes 00:32:16.554 ============================ 00:32:16.554 Security Send/Receive: Not Supported 00:32:16.554 Format NVM: Not Supported 00:32:16.554 Firmware Activate/Download: Not Supported 00:32:16.554 Namespace Management: Not Supported 00:32:16.554 Device Self-Test: Not Supported 00:32:16.554 Directives: Not Supported 00:32:16.554 NVMe-MI: Not Supported 00:32:16.554 Virtualization Management: Not Supported 00:32:16.554 Doorbell Buffer Config: Not Supported 00:32:16.554 Get LBA Status Capability: Not Supported 00:32:16.554 Command & Feature Lockdown Capability: Not Supported 00:32:16.554 Abort Command Limit: 4 00:32:16.554 Async Event Request Limit: 4 00:32:16.554 Number of Firmware Slots: N/A 00:32:16.554 Firmware Slot 1 Read-Only: N/A 00:32:16.554 Firmware Activation Without Reset: N/A 00:32:16.554 Multiple Update Detection Support: N/A 00:32:16.554 Firmware Update Granularity: No Information Provided 00:32:16.554 Per-Namespace SMART Log: Yes 00:32:16.554 Asymmetric Namespace Access Log Page: Supported 00:32:16.554 ANA Transition Time : 10 sec 00:32:16.554 00:32:16.554 Asymmetric Namespace Access Capabilities 00:32:16.554 ANA Optimized State : Supported 00:32:16.554 ANA Non-Optimized State : Supported 00:32:16.554 ANA Inaccessible State : Supported 00:32:16.554 ANA Persistent Loss 
State : Supported 00:32:16.554 ANA Change State : Supported 00:32:16.554 ANAGRPID is not changed : No 00:32:16.554 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:16.554 00:32:16.554 ANA Group Identifier Maximum : 128 00:32:16.554 Number of ANA Group Identifiers : 128 00:32:16.554 Max Number of Allowed Namespaces : 1024 00:32:16.554 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:16.554 Command Effects Log Page: Supported 00:32:16.554 Get Log Page Extended Data: Supported 00:32:16.554 Telemetry Log Pages: Not Supported 00:32:16.554 Persistent Event Log Pages: Not Supported 00:32:16.554 Supported Log Pages Log Page: May Support 00:32:16.554 Commands Supported & Effects Log Page: Not Supported 00:32:16.554 Feature Identifiers & Effects Log Page:May Support 00:32:16.554 NVMe-MI Commands & Effects Log Page: May Support 00:32:16.554 Data Area 4 for Telemetry Log: Not Supported 00:32:16.554 Error Log Page Entries Supported: 128 00:32:16.554 Keep Alive: Supported 00:32:16.554 Keep Alive Granularity: 1000 ms 00:32:16.554 00:32:16.554 NVM Command Set Attributes 00:32:16.554 ========================== 00:32:16.554 Submission Queue Entry Size 00:32:16.554 Max: 64 00:32:16.554 Min: 64 00:32:16.554 Completion Queue Entry Size 00:32:16.554 Max: 16 00:32:16.554 Min: 16 00:32:16.554 Number of Namespaces: 1024 00:32:16.554 Compare Command: Not Supported 00:32:16.554 Write Uncorrectable Command: Not Supported 00:32:16.554 Dataset Management Command: Supported 00:32:16.554 Write Zeroes Command: Supported 00:32:16.554 Set Features Save Field: Not Supported 00:32:16.554 Reservations: Not Supported 00:32:16.554 Timestamp: Not Supported 00:32:16.554 Copy: Not Supported 00:32:16.554 Volatile Write Cache: Present 00:32:16.554 Atomic Write Unit (Normal): 1 00:32:16.554 Atomic Write Unit (PFail): 1 00:32:16.554 Atomic Compare & Write Unit: 1 00:32:16.554 Fused Compare & Write: Not Supported 00:32:16.554 Scatter-Gather List 00:32:16.554 SGL Command Set: Supported 00:32:16.554 SGL 
Keyed: Not Supported 00:32:16.554 SGL Bit Bucket Descriptor: Not Supported 00:32:16.554 SGL Metadata Pointer: Not Supported 00:32:16.554 Oversized SGL: Not Supported 00:32:16.554 SGL Metadata Address: Not Supported 00:32:16.554 SGL Offset: Supported 00:32:16.554 Transport SGL Data Block: Not Supported 00:32:16.554 Replay Protected Memory Block: Not Supported 00:32:16.554 00:32:16.554 Firmware Slot Information 00:32:16.554 ========================= 00:32:16.554 Active slot: 0 00:32:16.554 00:32:16.554 Asymmetric Namespace Access 00:32:16.554 =========================== 00:32:16.554 Change Count : 0 00:32:16.554 Number of ANA Group Descriptors : 1 00:32:16.554 ANA Group Descriptor : 0 00:32:16.554 ANA Group ID : 1 00:32:16.554 Number of NSID Values : 1 00:32:16.554 Change Count : 0 00:32:16.554 ANA State : 1 00:32:16.554 Namespace Identifier : 1 00:32:16.554 00:32:16.554 Commands Supported and Effects 00:32:16.554 ============================== 00:32:16.554 Admin Commands 00:32:16.554 -------------- 00:32:16.554 Get Log Page (02h): Supported 00:32:16.554 Identify (06h): Supported 00:32:16.554 Abort (08h): Supported 00:32:16.554 Set Features (09h): Supported 00:32:16.554 Get Features (0Ah): Supported 00:32:16.554 Asynchronous Event Request (0Ch): Supported 00:32:16.554 Keep Alive (18h): Supported 00:32:16.554 I/O Commands 00:32:16.554 ------------ 00:32:16.554 Flush (00h): Supported 00:32:16.554 Write (01h): Supported LBA-Change 00:32:16.554 Read (02h): Supported 00:32:16.554 Write Zeroes (08h): Supported LBA-Change 00:32:16.554 Dataset Management (09h): Supported 00:32:16.554 00:32:16.554 Error Log 00:32:16.554 ========= 00:32:16.554 Entry: 0 00:32:16.554 Error Count: 0x3 00:32:16.554 Submission Queue Id: 0x0 00:32:16.554 Command Id: 0x5 00:32:16.554 Phase Bit: 0 00:32:16.554 Status Code: 0x2 00:32:16.554 Status Code Type: 0x0 00:32:16.554 Do Not Retry: 1 00:32:16.554 Error Location: 0x28 00:32:16.554 LBA: 0x0 00:32:16.554 Namespace: 0x0 00:32:16.554 Vendor Log Page: 
0x0 00:32:16.554 ----------- 00:32:16.554 Entry: 1 00:32:16.554 Error Count: 0x2 00:32:16.554 Submission Queue Id: 0x0 00:32:16.554 Command Id: 0x5 00:32:16.554 Phase Bit: 0 00:32:16.554 Status Code: 0x2 00:32:16.554 Status Code Type: 0x0 00:32:16.554 Do Not Retry: 1 00:32:16.554 Error Location: 0x28 00:32:16.554 LBA: 0x0 00:32:16.554 Namespace: 0x0 00:32:16.554 Vendor Log Page: 0x0 00:32:16.554 ----------- 00:32:16.554 Entry: 2 00:32:16.554 Error Count: 0x1 00:32:16.554 Submission Queue Id: 0x0 00:32:16.554 Command Id: 0x4 00:32:16.554 Phase Bit: 0 00:32:16.554 Status Code: 0x2 00:32:16.554 Status Code Type: 0x0 00:32:16.554 Do Not Retry: 1 00:32:16.554 Error Location: 0x28 00:32:16.554 LBA: 0x0 00:32:16.554 Namespace: 0x0 00:32:16.554 Vendor Log Page: 0x0 00:32:16.554 00:32:16.554 Number of Queues 00:32:16.554 ================ 00:32:16.554 Number of I/O Submission Queues: 128 00:32:16.554 Number of I/O Completion Queues: 128 00:32:16.554 00:32:16.554 ZNS Specific Controller Data 00:32:16.554 ============================ 00:32:16.554 Zone Append Size Limit: 0 00:32:16.554 00:32:16.554 00:32:16.554 Active Namespaces 00:32:16.554 ================= 00:32:16.554 get_feature(0x05) failed 00:32:16.554 Namespace ID:1 00:32:16.554 Command Set Identifier: NVM (00h) 00:32:16.554 Deallocate: Supported 00:32:16.555 Deallocated/Unwritten Error: Not Supported 00:32:16.555 Deallocated Read Value: Unknown 00:32:16.555 Deallocate in Write Zeroes: Not Supported 00:32:16.555 Deallocated Guard Field: 0xFFFF 00:32:16.555 Flush: Supported 00:32:16.555 Reservation: Not Supported 00:32:16.555 Namespace Sharing Capabilities: Multiple Controllers 00:32:16.555 Size (in LBAs): 1953525168 (931GiB) 00:32:16.555 Capacity (in LBAs): 1953525168 (931GiB) 00:32:16.555 Utilization (in LBAs): 1953525168 (931GiB) 00:32:16.555 UUID: a3f546c7-116a-4fbb-8ee8-ae3207495450 00:32:16.555 Thin Provisioning: Not Supported 00:32:16.555 Per-NS Atomic Units: Yes 00:32:16.555 Atomic Boundary Size (Normal): 0 
00:32:16.555 Atomic Boundary Size (PFail): 0 00:32:16.555 Atomic Boundary Offset: 0 00:32:16.555 NGUID/EUI64 Never Reused: No 00:32:16.555 ANA group ID: 1 00:32:16.555 Namespace Write Protected: No 00:32:16.555 Number of LBA Formats: 1 00:32:16.555 Current LBA Format: LBA Format #00 00:32:16.555 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:16.555 00:32:16.555 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:16.555 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:16.555 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:32:16.555 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:16.555 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:32:16.555 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:16.555 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:16.555 rmmod nvme_tcp 00:32:16.555 rmmod nvme_fabrics 00:32:16.555 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:16.555 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:32:16.555 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:32:16.555 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:16.555 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:16.555 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:16.555 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:16.555 
01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:16.555 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:16.555 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.555 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:16.555 01:14:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:19.084 01:14:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:19.084 01:14:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:19.084 01:14:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:19.084 01:14:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:32:19.084 01:14:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:19.084 01:14:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:19.084 01:14:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:19.084 01:14:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:19.084 01:14:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:19.084 01:14:49 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:19.084 01:14:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:20.020 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:20.020 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:20.020 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:20.020 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:20.020 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:20.020 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:20.020 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:20.020 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:20.020 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:20.020 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:20.020 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:20.020 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:20.020 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:20.020 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:20.020 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:20.020 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:20.956 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:20.957 00:32:20.957 real 0m9.217s 00:32:20.957 user 0m1.952s 00:32:20.957 sys 0m3.291s 00:32:20.957 01:14:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:20.957 01:14:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:20.957 ************************************ 00:32:20.957 END TEST nvmf_identify_kernel_target 00:32:20.957 ************************************ 00:32:20.957 01:14:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:20.957 01:14:51 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:20.957 01:14:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:20.957 01:14:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.957 ************************************ 00:32:20.957 START TEST nvmf_auth_host 00:32:20.957 ************************************ 00:32:20.957 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:21.215 * Looking for test storage... 00:32:21.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:21.215 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:21.215 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:21.215 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:21.215 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:21.215 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:21.215 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:21.215 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:21.215 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:21.215 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:21.215 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:21.215 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:21.215 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:32:21.215 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:21.215 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:21.215 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:21.215 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:21.215 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:21.215 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:21.215 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:21.215 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:21.215 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:21.215 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:21.215 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.215 01:14:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.215 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.215 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:21.215 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.215 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:21.216 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:21.216 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:21.216 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:21.216 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:21.216 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:21.216 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:21.216 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:21.216 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:21.216 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:21.216 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:21.216 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:21.216 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # 
hostnqn=nqn.2024-02.io.spdk:host0 00:32:21.216 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:21.216 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:21.216 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:21.216 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:32:21.216 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:21.216 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:21.216 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:21.216 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:21.216 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:21.216 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:21.216 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:21.216 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:21.216 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:21.216 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:21.216 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:21.216 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:21.216 01:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:23.117 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:23.117 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:23.117 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:23.117 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:23.117 01:14:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:23.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:23.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:32:23.117 00:32:23.117 --- 10.0.0.2 ping statistics --- 00:32:23.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:23.117 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:23.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:23.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:32:23.117 00:32:23.117 --- 10.0.0.1 ping statistics --- 00:32:23.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:23.117 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
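The `nvmf_tcp_init` episode above builds a point-to-point TCP test bed out of one physical NIC by moving one of its two ports into a private network namespace, so target (10.0.0.2, inside the namespace) and initiator (10.0.0.1, on the host) traverse a real wire. A minimal configuration sketch of that plumbing, using the interface and namespace names from the trace (requires root; not a verbatim copy of `nvmf/common.sh`):

```shell
# Sketch of the namespace plumbing seen in the trace. cvl_0_0 and
# cvl_0_1 are the two ports of the same physical NIC (here an Intel
# E810, device 0x159b, driven by ice).
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"          # target port lives in the namespace

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"   # initiator side stays on the host
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP (port 4420) in from the initiator-facing interface.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity checks, as in the trace: one ping each direction.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Every subsequent target-side command in the log is then wrapped in `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` prefix), which is why `nvmf_tgt` below is launched through the namespace.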
00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1964260 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1964260 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1964260 ']' 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
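The `nvmfappstart`/`waitforlisten` episode above launches `nvmf_tgt` in the namespace and then blocks until its RPC socket (`/var/tmp/spdk.sock`) appears, bailing out if the process dies first. A hedched, simplified sketch of that wait loop (the function name, socket path, and the 100-retry budget come from the trace; the body is an assumption, not SPDK's exact implementation):

```shell
# waitforlisten: poll until the SPDK app's RPC unix socket exists,
# failing fast if the process exits. Names mirror the trace; the
# loop body is a sketch, not a copy of autotest_common.sh.
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local i

    for ((i = 100; i != 0; i--)); do
        kill -0 "$pid" 2> /dev/null || return 1   # process died
        [ -S "$rpc_addr" ] && return 0            # socket is up
        sleep 0.1
    done
    return 1                                      # timed out
}
```

In the log this is invoked as `waitforlisten 1964260` right after `nvmfpid` is captured, with the trap already armed so `nvmftestfini` runs on failure.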
00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:23.117 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.683 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:23.683 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:32:23.683 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:23.683 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:23.683 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.683 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:23.683 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:23.683 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:23.683 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:23.683 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:23.683 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:23.683 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:23.683 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:23.683 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:23.683 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e9243a9c9342deeebfaec86f620fab82 00:32:23.683 01:14:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:23.683 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.5Gf 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e9243a9c9342deeebfaec86f620fab82 0 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e9243a9c9342deeebfaec86f620fab82 0 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e9243a9c9342deeebfaec86f620fab82 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.5Gf 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.5Gf 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.5Gf 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:23.684 01:14:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=21148e05aee32dbac1b18bc9f05a27cc27884da4f96c6b1a96c377dfbcef585d 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.2tt 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 21148e05aee32dbac1b18bc9f05a27cc27884da4f96c6b1a96c377dfbcef585d 3 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 21148e05aee32dbac1b18bc9f05a27cc27884da4f96c6b1a96c377dfbcef585d 3 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=21148e05aee32dbac1b18bc9f05a27cc27884da4f96c6b1a96c377dfbcef585d 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.2tt 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.2tt 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.2tt 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=722d544245ec31d30750340d12d64c09ef9e4f33ab1696aa 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.2xJ 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 722d544245ec31d30750340d12d64c09ef9e4f33ab1696aa 0 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 722d544245ec31d30750340d12d64c09ef9e4f33ab1696aa 0 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=722d544245ec31d30750340d12d64c09ef9e4f33ab1696aa 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:23.684 01:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.2xJ 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.2xJ 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.2xJ 
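Each `gen_dhchap_key <digest> <len>` block above follows the same pattern: draw `len/2` random bytes, hex-encode them, write the result into a `mktemp` file in the DH-HMAC-CHAP secret representation (`DHHC-1:<digest>:<base64>:`), and `chmod 0600` it. A condensed sketch of that flow, assuming SPDK's inline `python -` step appends a little-endian CRC-32 to the key bytes before base64-encoding, per the NVMe DH-HMAC-CHAP secret format (here using `od` in place of the trace's `xxd` for portability; the helper body is an assumption, not a copy of `nvmf/common.sh`):

```shell
# Sketch of gen_dhchap_key null 32 from the trace: 32 hex chars of
# key material, formatted as a DHHC-1 secret with digest index 0.
len=32
key=$(head -c $((len / 2)) /dev/urandom | od -An -tx1 | tr -d ' \n')
file=$(mktemp -t spdk.key-null.XXX)

# format_key DHHC-1 "$key" 0, sketched: base64(key || CRC-32(key)).
python3 - "$key" 0 > "$file" << 'EOF'
import base64
import sys
import zlib

key = bytes.fromhex(sys.argv[1])
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # CRC-32 suffix, little-endian
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
EOF

chmod 0600 "$file"
echo "$file"
```

The trace repeats this ten times to fill `keys[0..4]` and their controller-side counterparts `ckeys[0..4]`, pairing each key's digest strength with its ckey (e.g. `keys[0]` null/32 with `ckeys[0]` sha512/64).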
00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=11a6c93c1108606995b181cb6e86f239cf1640d197fedb5a 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.PHD 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 11a6c93c1108606995b181cb6e86f239cf1640d197fedb5a 2 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 11a6c93c1108606995b181cb6e86f239cf1640d197fedb5a 2 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=11a6c93c1108606995b181cb6e86f239cf1640d197fedb5a 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:23.684 01:14:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.PHD 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.PHD 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.PHD 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=93202e6994bb56ee5ae8dd17b5ce6d64 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.PlB 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 93202e6994bb56ee5ae8dd17b5ce6d64 1 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 93202e6994bb56ee5ae8dd17b5ce6d64 1 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=93202e6994bb56ee5ae8dd17b5ce6d64 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:23.684 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:23.942 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.PlB 00:32:23.942 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.PlB 00:32:23.942 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.PlB 00:32:23.942 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:23.942 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:23.942 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=452a63a0349c2577d75c328e1b4211f3 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.fjf 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 452a63a0349c2577d75c328e1b4211f3 1 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 452a63a0349c2577d75c328e1b4211f3 1 00:32:23.943 01:14:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=452a63a0349c2577d75c328e1b4211f3 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.fjf 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.fjf 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.fjf 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8e421903e1c2a03e692093161a254415f53c50af1a27a5cd 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.vJv 00:32:23.943 01:14:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8e421903e1c2a03e692093161a254415f53c50af1a27a5cd 2 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8e421903e1c2a03e692093161a254415f53c50af1a27a5cd 2 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8e421903e1c2a03e692093161a254415f53c50af1a27a5cd 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.vJv 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.vJv 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.vJv 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # 
key=ecd707cecde0d46baed6d1d0150d8ff8 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.7xV 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ecd707cecde0d46baed6d1d0150d8ff8 0 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ecd707cecde0d46baed6d1d0150d8ff8 0 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ecd707cecde0d46baed6d1d0150d8ff8 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.7xV 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.7xV 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.7xV 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@726 -- # len=64 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=24eb8f2d82823ad95137b9821ee2e20e2eb9d61dbf354e6cbbdd7b478073fe6f 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.7u8 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 24eb8f2d82823ad95137b9821ee2e20e2eb9d61dbf354e6cbbdd7b478073fe6f 3 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 24eb8f2d82823ad95137b9821ee2e20e2eb9d61dbf354e6cbbdd7b478073fe6f 3 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=24eb8f2d82823ad95137b9821ee2e20e2eb9d61dbf354e6cbbdd7b478073fe6f 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.7u8 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.7u8 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.7u8 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1964260 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@831 -- # '[' -z 1964260 ']' 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:23.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:23.943 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.202 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:24.202 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:32:24.202 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:24.202 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5Gf 00:32:24.202 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.202 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.202 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.202 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.2tt ]] 00:32:24.202 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2tt 00:32:24.202 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.202 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:24.202 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.202 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:24.202 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.2xJ 00:32:24.202 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.202 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.460 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.460 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.PHD ]] 00:32:24.460 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.PHD 00:32:24.460 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.460 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.460 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.460 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:24.460 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.PlB 00:32:24.460 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.460 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.460 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.460 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.fjf ]] 00:32:24.460 01:14:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.fjf 00:32:24.460 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.460 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.460 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.460 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:24.460 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.vJv 00:32:24.460 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.460 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.460 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.460 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.7xV ]] 00:32:24.460 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.7xV 00:32:24.460 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.460 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.460 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.460 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:24.460 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.7u8 00:32:24.460 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.461 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- 
# set +x 00:32:24.461 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.461 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:24.461 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:24.461 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:24.461 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:24.461 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:24.461 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:24.461 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.461 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.461 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:24.461 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.461 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:24.461 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:24.461 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:24.461 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:24.461 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:24.461 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:24.461 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:24.461 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:24.461 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:24.461 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:32:24.461 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:24.461 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:24.461 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:24.461 01:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:25.395 Waiting for block devices as requested 00:32:25.395 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:25.653 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:25.653 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:25.911 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:25.911 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:25.911 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:26.168 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:26.169 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:26.169 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:26.169 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:26.426 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:26.426 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:26.426 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:26.426 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:26.684 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:26.684 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:26.684 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:32:27.250 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:27.250 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:27.250 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:27.251 No valid GPT data, bailing 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:27.251 01:14:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:27.251 00:32:27.251 Discovery Log Number of Records 2, Generation counter 2 00:32:27.251 =====Discovery Log Entry 0====== 00:32:27.251 trtype: tcp 00:32:27.251 adrfam: ipv4 00:32:27.251 subtype: current discovery subsystem 00:32:27.251 treq: not specified, sq flow control disable supported 00:32:27.251 portid: 1 00:32:27.251 trsvcid: 4420 00:32:27.251 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:27.251 traddr: 10.0.0.1 00:32:27.251 eflags: none 00:32:27.251 sectype: none 00:32:27.251 =====Discovery Log Entry 1====== 00:32:27.251 trtype: tcp 00:32:27.251 adrfam: ipv4 00:32:27.251 subtype: nvme subsystem 00:32:27.251 treq: not specified, sq flow control 
disable supported 00:32:27.251 portid: 1 00:32:27.251 trsvcid: 4420 00:32:27.251 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:27.251 traddr: 10.0.0.1 00:32:27.251 eflags: none 00:32:27.251 sectype: none 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==: 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==: 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: ]] 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.251 01:14:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.251 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.510 nvme0n1 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.510 01:14:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyNDNhOWM5MzQyZGVlZWJmYWVjODZmNjIwZmFiODJJ0+8u: 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyNDNhOWM5MzQyZGVlZWJmYWVjODZmNjIwZmFiODJJ0+8u: 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: ]] 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.510 01:14:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:27.510 nvme0n1
00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:27.510 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==:
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==:
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==:
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: ]]
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==:
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:27.768 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:27.769 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:27.769 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:27.769 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:27.769 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:27.769 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:27.769 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:27.769 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:32:27.769 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:27.769 01:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:27.769 nvme0n1
00:32:27.769 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:27.769 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:27.769 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:27.769 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:27.769 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:27.769 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTMyMDJlNjk5NGJiNTZlZTVhZThkZDE3YjVjZTZkNjRjjo9K:
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR:
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTMyMDJlNjk5NGJiNTZlZTVhZThkZDE3YjVjZTZkNjRjjo9K:
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: ]]
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR:
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:28.026 nvme0n1
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGU0MjE5MDNlMWMyYTAzZTY5MjA5MzE2MWEyNTQ0MTVmNTNjNTBhZjFhMjdhNWNkZAf89Q==:
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6:
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGU0MjE5MDNlMWMyYTAzZTY5MjA5MzE2MWEyNTQ0MTVmNTNjNTBhZjFhMjdhNWNkZAf89Q==:
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: ]]
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6:
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:28.026 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:28.284 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:28.284 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:28.284 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:28.284 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:28.284 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:28.284 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:28.284 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:28.284 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:28.284 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:28.284 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:28.284 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:28.284 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:28.284 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:32:28.284 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:28.284 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:28.284 nvme0n1
00:32:28.284 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:28.284 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:28.284 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:28.284 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:28.284 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:28.284 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjRlYjhmMmQ4MjgyM2FkOTUxMzdiOTgyMWVlMmUyMGUyZWI5ZDYxZGJmMzU0ZTZjYmJkZDdiNDc4MDczZmU2ZvVmQYI=:
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjRlYjhmMmQ4MjgyM2FkOTUxMzdiOTgyMWVlMmUyMGUyZWI5ZDYxZGJmMzU0ZTZjYmJkZDdiNDc4MDczZmU2ZvVmQYI=:
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:28.285 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:28.543 nvme0n1
00:32:28.543 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:28.543 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:28.543 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:28.543 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:28.543 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:28.543 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:28.543 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:28.543 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:28.543 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyNDNhOWM5MzQyZGVlZWJmYWVjODZmNjIwZmFiODJJ0+8u:
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=:
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyNDNhOWM5MzQyZGVlZWJmYWVjODZmNjIwZmFiODJJ0+8u:
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: ]]
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=:
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:28.544 01:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:28.802 nvme0n1
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==:
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==:
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==:
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: ]]
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==:
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:28.802 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:28.803 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:28.803 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:28.803 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:28.803 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:28.803 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:28.803 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:32:28.803 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:28.803 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:29.061 nvme0n1
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTMyMDJlNjk5NGJiNTZlZTVhZThkZDE3YjVjZTZkNjRjjo9K:
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR:
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTMyMDJlNjk5NGJiNTZlZTVhZThkZDE3YjVjZTZkNjRjjo9K:
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: ]]
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR:
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:29.061 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:29.320 nvme0n1
00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- #
nvmet_auth_set_key sha256 ffdhe3072 3 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGU0MjE5MDNlMWMyYTAzZTY5MjA5MzE2MWEyNTQ0MTVmNTNjNTBhZjFhMjdhNWNkZAf89Q==: 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGU0MjE5MDNlMWMyYTAzZTY5MjA5MzE2MWEyNTQ0MTVmNTNjNTBhZjFhMjdhNWNkZAf89Q==: 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: ]] 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:29.320 01:14:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.320 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.578 nvme0n1 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjRlYjhmMmQ4MjgyM2FkOTUxMzdiOTgyMWVlMmUyMGUyZWI5ZDYxZGJmMzU0ZTZjYmJkZDdiNDc4MDczZmU2ZvVmQYI=: 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjRlYjhmMmQ4MjgyM2FkOTUxMzdiOTgyMWVlMmUyMGUyZWI5ZDYxZGJmMzU0ZTZjYmJkZDdiNDc4MDczZmU2ZvVmQYI=: 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.578 
01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.578 01:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.836 nvme0n1 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyNDNhOWM5MzQyZGVlZWJmYWVjODZmNjIwZmFiODJJ0+8u: 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:29.836 01:15:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyNDNhOWM5MzQyZGVlZWJmYWVjODZmNjIwZmFiODJJ0+8u: 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: ]] 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.836 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.837 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.837 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.837 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.837 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.837 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:29.837 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.837 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.401 nvme0n1 00:32:30.401 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.401 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.401 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.401 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.401 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.401 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.401 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:32:30.401 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.401 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.401 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.401 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.401 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.401 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:30.401 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.401 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:30.401 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:30.401 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:30.401 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==: 00:32:30.401 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: 00:32:30.401 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:30.401 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:30.401 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==: 00:32:30.401 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: ]] 00:32:30.401 01:15:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: 00:32:30.401 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:30.401 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.402 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:30.402 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:30.402 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:30.402 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.402 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:30.402 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.402 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.402 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.402 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.402 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.402 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.402 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.402 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.402 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.402 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:32:30.402 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.402 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.402 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.402 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.402 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:30.402 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.402 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.662 nvme0n1 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTMyMDJlNjk5NGJiNTZlZTVhZThkZDE3YjVjZTZkNjRjjo9K: 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTMyMDJlNjk5NGJiNTZlZTVhZThkZDE3YjVjZTZkNjRjjo9K: 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: ]] 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:30.662 
01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.662 01:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.919 nvme0n1 00:32:30.919 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.919 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.919 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.919 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.919 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.920 01:15:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGU0MjE5MDNlMWMyYTAzZTY5MjA5MzE2MWEyNTQ0MTVmNTNjNTBhZjFhMjdhNWNkZAf89Q==: 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGU0MjE5MDNlMWMyYTAzZTY5MjA5MzE2MWEyNTQ0MTVmNTNjNTBhZjFhMjdhNWNkZAf89Q==: 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: ]] 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.920 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:31.178 nvme0n1 00:32:31.178 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.178 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.178 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.178 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.178 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.436 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.436 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.436 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.436 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.436 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.436 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.436 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.436 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:31.436 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.436 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:31.436 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:31.436 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:31.436 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjRlYjhmMmQ4MjgyM2FkOTUxMzdiOTgyMWVlMmUyMGUyZWI5ZDYxZGJmMzU0ZTZjYmJkZDdiNDc4MDczZmU2ZvVmQYI=: 00:32:31.436 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:31.436 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:31.436 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:31.436 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjRlYjhmMmQ4MjgyM2FkOTUxMzdiOTgyMWVlMmUyMGUyZWI5ZDYxZGJmMzU0ZTZjYmJkZDdiNDc4MDczZmU2ZvVmQYI=: 00:32:31.436 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:31.436 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:31.436 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.436 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:31.436 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:31.436 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:31.437 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.437 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:31.437 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.437 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.437 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.437 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.437 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.437 
01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.437 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.437 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.437 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.437 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.437 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.437 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.437 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.437 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.437 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:31.437 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.437 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.695 nvme0n1 00:32:31.695 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.695 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.695 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.695 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.695 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.695 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.695 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.695 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.695 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.695 01:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.695 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.695 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:31.695 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.695 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyNDNhOWM5MzQyZGVlZWJmYWVjODZmNjIwZmFiODJJ0+8u: 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTkyNDNhOWM5MzQyZGVlZWJmYWVjODZmNjIwZmFiODJJ0+8u: 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: ]] 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.696 01:15:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.696 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.261 nvme0n1 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.261 01:15:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==: 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==: 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: ]] 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.261 01:15:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.261 01:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.827 nvme0n1 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.827 01:15:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTMyMDJlNjk5NGJiNTZlZTVhZThkZDE3YjVjZTZkNjRjjo9K: 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTMyMDJlNjk5NGJiNTZlZTVhZThkZDE3YjVjZTZkNjRjjo9K: 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: ]] 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:32.827 01:15:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.827 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.828 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:32.828 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.828 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.392 nvme0n1 00:32:33.392 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.392 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.392 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.392 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.392 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.392 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.392 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.392 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.392 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.392 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.392 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.392 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.392 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:33.392 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.392 01:15:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:33.392 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:33.392 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:33.392 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGU0MjE5MDNlMWMyYTAzZTY5MjA5MzE2MWEyNTQ0MTVmNTNjNTBhZjFhMjdhNWNkZAf89Q==: 00:32:33.392 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: 00:32:33.392 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:33.392 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:33.392 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGU0MjE5MDNlMWMyYTAzZTY5MjA5MzE2MWEyNTQ0MTVmNTNjNTBhZjFhMjdhNWNkZAf89Q==: 00:32:33.392 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: ]] 00:32:33.392 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: 00:32:33.392 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:33.392 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.392 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:33.392 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:33.392 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:33.392 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.393 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:33.393 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.393 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.393 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.393 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.393 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.393 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.393 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.393 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.393 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.393 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.393 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.393 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.393 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.393 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.393 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:33.393 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.393 01:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:33.959 nvme0n1 00:32:33.959 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.959 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.959 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.959 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.959 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.959 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.959 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.959 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.960 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.960 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjRlYjhmMmQ4MjgyM2FkOTUxMzdiOTgyMWVlMmUyMGUyZWI5ZDYxZGJmMzU0ZTZjYmJkZDdiNDc4MDczZmU2ZvVmQYI=: 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjRlYjhmMmQ4MjgyM2FkOTUxMzdiOTgyMWVlMmUyMGUyZWI5ZDYxZGJmMzU0ZTZjYmJkZDdiNDc4MDczZmU2ZvVmQYI=: 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.237 
01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.237 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.810 nvme0n1 00:32:34.810 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.810 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.810 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.810 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.810 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.810 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.810 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.810 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.810 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.810 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.810 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.810 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:34.810 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.810 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:34.810 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.810 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:34.810 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:34.810 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:34.810 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyNDNhOWM5MzQyZGVlZWJmYWVjODZmNjIwZmFiODJJ0+8u: 00:32:34.810 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: 00:32:34.810 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:34.810 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:34.810 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTkyNDNhOWM5MzQyZGVlZWJmYWVjODZmNjIwZmFiODJJ0+8u: 00:32:34.811 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: ]] 00:32:34.811 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: 00:32:34.811 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:34.811 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.811 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:34.811 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:34.811 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:34.811 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.811 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:34.811 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.811 01:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.811 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.811 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.811 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.811 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.811 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.811 01:15:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.811 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.811 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.811 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.811 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.811 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.811 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.811 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:34.811 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.811 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.744 nvme0n1 00:32:35.744 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.744 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.744 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.744 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.744 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.744 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.744 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.744 01:15:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.744 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.744 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.744 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.744 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.744 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:35.744 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.744 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:35.744 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:35.744 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:35.744 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==: 00:32:35.744 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: 00:32:35.744 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:35.744 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:35.744 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==: 00:32:35.744 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: ]] 00:32:35.744 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: 00:32:35.745 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:35.745 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.745 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:35.745 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:35.745 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:35.745 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.745 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:35.745 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.745 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.745 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.745 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.745 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.745 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.745 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.745 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.745 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.745 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.745 01:15:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.745 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.745 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.745 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.745 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:35.745 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.745 01:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.678 nvme0n1 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.678 01:15:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTMyMDJlNjk5NGJiNTZlZTVhZThkZDE3YjVjZTZkNjRjjo9K: 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTMyMDJlNjk5NGJiNTZlZTVhZThkZDE3YjVjZTZkNjRjjo9K: 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: ]] 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:36.678 01:15:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.678 01:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.611 nvme0n1 00:32:37.611 01:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.611 01:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.611 01:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.611 01:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.611 01:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.611 01:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.611 01:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.611 01:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.611 01:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.612 01:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.612 01:15:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGU0MjE5MDNlMWMyYTAzZTY5MjA5MzE2MWEyNTQ0MTVmNTNjNTBhZjFhMjdhNWNkZAf89Q==: 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGU0MjE5MDNlMWMyYTAzZTY5MjA5MzE2MWEyNTQ0MTVmNTNjNTBhZjFhMjdhNWNkZAf89Q==: 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: ]] 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.612 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
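The log above repeats one sequence per key ID and DH group: set the target-side key, configure the host's allowed digest/DH group via `bdev_nvme_set_options`, attach with `--dhchap-key keyN` (plus `--dhchap-ctrlr-key ckeyN` when a controller key exists), verify the controller, then detach. A minimal dry-run sketch of that loop is below; the `rpc.py` path, NQNs, and address are assumptions taken from the log, and the script only prints the commands rather than invoking SPDK:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the per-key DH-HMAC-CHAP connect loop seen in the log.
# RPC path, NQNs, and IP/port are assumptions; this only echoes the commands.
RPC="scripts/rpc.py"                    # hypothetical path to SPDK's rpc.py
HOSTNQN="nqn.2024-02.io.spdk:host0"
SUBNQN="nqn.2024-02.io.spdk:cnode0"

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Restrict the host to one digest and one DH group for this iteration
    echo "$RPC bdev_nvme_set_options --dhchap-digests $digest --dhchap-dhgroups $dhgroup"
    # Attach using the per-key credential (controller key omitted in this sketch)
    echo "$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q $HOSTNQN -n $SUBNQN --dhchap-key key$keyid"
    # Confirm the controller appeared, then detach before the next key
    echo "$RPC bdev_nvme_get_controllers"
    echo "$RPC bdev_nvme_detach_controller nvme0"
}

# Iterate every key ID for one digest/DH-group combination, as the test does
for keyid in 0 1 2 3 4; do
    connect_authenticate sha256 ffdhe8192 "$keyid"
done
```

Each attach/detach pair keeps the controller name `nvme0` reusable across iterations, which is why the log checks `[[ nvme0 == \n\v\m\e\0 ]]` before every detach.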
00:32:38.545 nvme0n1 00:32:38.545 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.802 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.802 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.802 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.802 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.802 01:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.802 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.802 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.802 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.802 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.802 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.802 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.802 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:38.802 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.802 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:38.802 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:38.802 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:38.802 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjRlYjhmMmQ4MjgyM2FkOTUxMzdiOTgyMWVlMmUyMGUyZWI5ZDYxZGJmMzU0ZTZjYmJkZDdiNDc4MDczZmU2ZvVmQYI=: 00:32:38.802 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:38.802 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:38.802 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:38.802 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjRlYjhmMmQ4MjgyM2FkOTUxMzdiOTgyMWVlMmUyMGUyZWI5ZDYxZGJmMzU0ZTZjYmJkZDdiNDc4MDczZmU2ZvVmQYI=: 00:32:38.802 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:38.802 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:38.802 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.802 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:38.802 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:38.802 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:38.802 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.802 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:38.802 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.802 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.802 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.803 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.803 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.803 
01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.803 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.803 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.803 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.803 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.803 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.803 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.803 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.803 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.803 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:38.803 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.803 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.737 nvme0n1 00:32:39.737 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.737 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.737 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.737 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.737 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.737 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.737 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.737 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.737 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.737 01:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyNDNhOWM5MzQyZGVlZWJmYWVjODZmNjIwZmFiODJJ0+8u: 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:39.737 01:15:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyNDNhOWM5MzQyZGVlZWJmYWVjODZmNjIwZmFiODJJ0+8u: 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: ]] 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.737 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.996 nvme0n1 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==: 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==: 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: ]] 00:32:39.996 01:15:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.996 nvme0n1 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.996 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.254 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTMyMDJlNjk5NGJiNTZlZTVhZThkZDE3YjVjZTZkNjRjjo9K: 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTMyMDJlNjk5NGJiNTZlZTVhZThkZDE3YjVjZTZkNjRjjo9K: 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: ]] 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:40.255 
01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.255 nvme0n1 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.255 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.513 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.513 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.513 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:40.513 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.514 01:15:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGU0MjE5MDNlMWMyYTAzZTY5MjA5MzE2MWEyNTQ0MTVmNTNjNTBhZjFhMjdhNWNkZAf89Q==: 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGU0MjE5MDNlMWMyYTAzZTY5MjA5MzE2MWEyNTQ0MTVmNTNjNTBhZjFhMjdhNWNkZAf89Q==: 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: ]] 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:40.514 nvme0n1 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjRlYjhmMmQ4MjgyM2FkOTUxMzdiOTgyMWVlMmUyMGUyZWI5ZDYxZGJmMzU0ZTZjYmJkZDdiNDc4MDczZmU2ZvVmQYI=: 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjRlYjhmMmQ4MjgyM2FkOTUxMzdiOTgyMWVlMmUyMGUyZWI5ZDYxZGJmMzU0ZTZjYmJkZDdiNDc4MDczZmU2ZvVmQYI=: 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.514 
01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.514 01:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.772 nvme0n1 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyNDNhOWM5MzQyZGVlZWJmYWVjODZmNjIwZmFiODJJ0+8u: 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTkyNDNhOWM5MzQyZGVlZWJmYWVjODZmNjIwZmFiODJJ0+8u: 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: ]] 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.772 01:15:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.772 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.029 nvme0n1 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.029 01:15:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==: 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==: 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: ]] 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.029 01:15:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.029 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.030 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.030 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:41.030 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.030 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.287 nvme0n1 00:32:41.287 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.287 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.288 01:15:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTMyMDJlNjk5NGJiNTZlZTVhZThkZDE3YjVjZTZkNjRjjo9K: 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTMyMDJlNjk5NGJiNTZlZTVhZThkZDE3YjVjZTZkNjRjjo9K: 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: ]] 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:41.288 01:15:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.288 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.546 nvme0n1 00:32:41.546 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.546 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.546 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.546 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.546 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.546 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.546 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.546 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.546 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.546 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.546 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.546 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.546 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:41.546 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.546 01:15:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:41.546 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:41.546 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:41.546 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGU0MjE5MDNlMWMyYTAzZTY5MjA5MzE2MWEyNTQ0MTVmNTNjNTBhZjFhMjdhNWNkZAf89Q==: 00:32:41.546 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: 00:32:41.546 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:41.546 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:41.546 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGU0MjE5MDNlMWMyYTAzZTY5MjA5MzE2MWEyNTQ0MTVmNTNjNTBhZjFhMjdhNWNkZAf89Q==: 00:32:41.546 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: ]] 00:32:41.546 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: 00:32:41.547 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:41.547 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.547 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:41.547 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:41.547 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:41.547 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.547 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:41.547 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.547 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.547 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.547 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.547 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.547 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.547 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.547 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.547 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.547 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.547 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.547 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.547 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.547 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.547 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:41.547 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.547 01:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:41.805 nvme0n1 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjRlYjhmMmQ4MjgyM2FkOTUxMzdiOTgyMWVlMmUyMGUyZWI5ZDYxZGJmMzU0ZTZjYmJkZDdiNDc4MDczZmU2ZvVmQYI=: 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjRlYjhmMmQ4MjgyM2FkOTUxMzdiOTgyMWVlMmUyMGUyZWI5ZDYxZGJmMzU0ZTZjYmJkZDdiNDc4MDczZmU2ZvVmQYI=: 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.805 
01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.805 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.063 nvme0n1 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyNDNhOWM5MzQyZGVlZWJmYWVjODZmNjIwZmFiODJJ0+8u: 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTkyNDNhOWM5MzQyZGVlZWJmYWVjODZmNjIwZmFiODJJ0+8u: 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: ]] 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.063 01:15:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.063 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.320 nvme0n1 00:32:42.320 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.320 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.578 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.578 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.578 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.578 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.578 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.578 01:15:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.578 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.578 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.578 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.578 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.578 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:42.578 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.578 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:42.578 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:42.578 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:42.578 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==: 00:32:42.578 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: 00:32:42.578 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:42.578 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:42.578 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==: 00:32:42.578 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: ]] 00:32:42.578 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: 00:32:42.578 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:42.578 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.578 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:42.578 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:42.578 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:42.578 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.578 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:42.578 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.578 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.578 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.579 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.579 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.579 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.579 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.579 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.579 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.579 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.579 01:15:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.579 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.579 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.579 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.579 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:42.579 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.579 01:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.836 nvme0n1 00:32:42.836 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.836 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.836 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.836 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.836 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.836 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.836 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.836 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.836 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.836 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.836 01:15:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.836 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.836 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:42.836 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.836 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:42.836 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:42.836 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:42.836 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTMyMDJlNjk5NGJiNTZlZTVhZThkZDE3YjVjZTZkNjRjjo9K: 00:32:42.836 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: 00:32:42.836 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:42.836 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:42.836 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTMyMDJlNjk5NGJiNTZlZTVhZThkZDE3YjVjZTZkNjRjjo9K: 00:32:42.836 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: ]] 00:32:42.837 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: 00:32:42.837 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:42.837 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.837 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:42.837 01:15:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:42.837 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:42.837 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.837 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:42.837 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.837 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.837 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.837 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.837 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.837 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.837 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.837 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.837 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.837 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.837 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.837 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.837 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.837 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.837 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:42.837 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.837 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.106 nvme0n1 00:32:43.106 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.106 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.106 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.106 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.106 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.106 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.106 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.106 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.106 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.106 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.106 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.106 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.106 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:43.106 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.106 01:15:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:43.106 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:43.106 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:43.106 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGU0MjE5MDNlMWMyYTAzZTY5MjA5MzE2MWEyNTQ0MTVmNTNjNTBhZjFhMjdhNWNkZAf89Q==: 00:32:43.106 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: 00:32:43.106 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:43.106 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:43.106 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGU0MjE5MDNlMWMyYTAzZTY5MjA5MzE2MWEyNTQ0MTVmNTNjNTBhZjFhMjdhNWNkZAf89Q==: 00:32:43.106 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: ]] 00:32:43.106 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: 00:32:43.106 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:43.106 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.106 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:43.106 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:43.107 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:43.107 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.107 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:43.107 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.107 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.365 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.365 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.365 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.365 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.365 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.365 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.365 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.365 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.365 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.365 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.365 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.365 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.365 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:43.365 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.365 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:43.622 nvme0n1 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjRlYjhmMmQ4MjgyM2FkOTUxMzdiOTgyMWVlMmUyMGUyZWI5ZDYxZGJmMzU0ZTZjYmJkZDdiNDc4MDczZmU2ZvVmQYI=: 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjRlYjhmMmQ4MjgyM2FkOTUxMzdiOTgyMWVlMmUyMGUyZWI5ZDYxZGJmMzU0ZTZjYmJkZDdiNDc4MDczZmU2ZvVmQYI=: 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.622 
01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.622 01:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.879 nvme0n1 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyNDNhOWM5MzQyZGVlZWJmYWVjODZmNjIwZmFiODJJ0+8u: 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTkyNDNhOWM5MzQyZGVlZWJmYWVjODZmNjIwZmFiODJJ0+8u: 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: ]] 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.879 01:15:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.879 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.443 nvme0n1 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.443 01:15:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==: 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==: 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: ]] 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.443 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.700 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.700 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.700 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.700 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.700 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.700 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.700 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.700 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.700 01:15:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.700 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.700 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.700 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.700 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:44.700 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.700 01:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.262 nvme0n1 00:32:45.262 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.262 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.262 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.262 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.262 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.262 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.262 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.262 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.262 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.262 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.262 01:15:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.262 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.262 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:45.262 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.262 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:45.262 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:45.262 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:45.263 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTMyMDJlNjk5NGJiNTZlZTVhZThkZDE3YjVjZTZkNjRjjo9K: 00:32:45.263 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: 00:32:45.263 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:45.263 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:45.263 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTMyMDJlNjk5NGJiNTZlZTVhZThkZDE3YjVjZTZkNjRjjo9K: 00:32:45.263 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: ]] 00:32:45.263 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: 00:32:45.263 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:45.263 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.263 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:45.263 01:15:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:45.263 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:45.263 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.263 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:45.263 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.263 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.263 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.263 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.263 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.263 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.263 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.263 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.263 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.263 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.263 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.263 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.263 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.263 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.263 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:45.263 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.263 01:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.828 nvme0n1 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.828 01:15:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGU0MjE5MDNlMWMyYTAzZTY5MjA5MzE2MWEyNTQ0MTVmNTNjNTBhZjFhMjdhNWNkZAf89Q==: 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGU0MjE5MDNlMWMyYTAzZTY5MjA5MzE2MWEyNTQ0MTVmNTNjNTBhZjFhMjdhNWNkZAf89Q==: 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: ]] 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.828 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:46.394 nvme0n1 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjRlYjhmMmQ4MjgyM2FkOTUxMzdiOTgyMWVlMmUyMGUyZWI5ZDYxZGJmMzU0ZTZjYmJkZDdiNDc4MDczZmU2ZvVmQYI=: 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjRlYjhmMmQ4MjgyM2FkOTUxMzdiOTgyMWVlMmUyMGUyZWI5ZDYxZGJmMzU0ZTZjYmJkZDdiNDc4MDczZmU2ZvVmQYI=: 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.394 
01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.394 01:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.960 nvme0n1 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyNDNhOWM5MzQyZGVlZWJmYWVjODZmNjIwZmFiODJJ0+8u: 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTkyNDNhOWM5MzQyZGVlZWJmYWVjODZmNjIwZmFiODJJ0+8u: 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: ]] 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.960 01:15:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.960 01:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.891 nvme0n1 00:32:47.892 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.892 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.892 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.892 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.892 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.149 01:15:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==: 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==: 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: ]] 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.149 01:15:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.149 01:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.080 nvme0n1 00:32:49.080 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.080 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.080 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.080 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.080 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.080 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.080 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.080 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.080 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.080 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.080 01:15:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.080 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.080 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:49.080 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.080 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.080 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:49.080 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:49.080 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTMyMDJlNjk5NGJiNTZlZTVhZThkZDE3YjVjZTZkNjRjjo9K: 00:32:49.080 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: 00:32:49.080 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.080 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:49.080 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTMyMDJlNjk5NGJiNTZlZTVhZThkZDE3YjVjZTZkNjRjjo9K: 00:32:49.080 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: ]] 00:32:49.080 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: 00:32:49.080 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:49.080 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.080 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.080 01:15:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:49.080 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:49.080 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.081 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:49.081 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.081 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.081 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.081 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.081 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.081 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.081 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.081 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.081 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.081 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.081 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.081 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.081 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.081 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.081 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:49.081 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.081 01:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.012 nvme0n1 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.012 01:15:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGU0MjE5MDNlMWMyYTAzZTY5MjA5MzE2MWEyNTQ0MTVmNTNjNTBhZjFhMjdhNWNkZAf89Q==: 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGU0MjE5MDNlMWMyYTAzZTY5MjA5MzE2MWEyNTQ0MTVmNTNjNTBhZjFhMjdhNWNkZAf89Q==: 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: ]] 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.012 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.269 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.269 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.269 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.269 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.269 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.269 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.269 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.269 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.269 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.269 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.269 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.269 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.269 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:50.269 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.269 01:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:51.202 nvme0n1 00:32:51.202 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.202 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.202 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.202 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.202 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.202 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjRlYjhmMmQ4MjgyM2FkOTUxMzdiOTgyMWVlMmUyMGUyZWI5ZDYxZGJmMzU0ZTZjYmJkZDdiNDc4MDczZmU2ZvVmQYI=: 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjRlYjhmMmQ4MjgyM2FkOTUxMzdiOTgyMWVlMmUyMGUyZWI5ZDYxZGJmMzU0ZTZjYmJkZDdiNDc4MDczZmU2ZvVmQYI=: 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.203 
01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.203 01:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.162 nvme0n1 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyNDNhOWM5MzQyZGVlZWJmYWVjODZmNjIwZmFiODJJ0+8u: 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:52.162 01:15:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkyNDNhOWM5MzQyZGVlZWJmYWVjODZmNjIwZmFiODJJ0+8u: 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: ]] 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.162 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.424 nvme0n1 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==: 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==: 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: ]] 00:32:52.424 01:15:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.424 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.425 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.425 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:52.425 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.425 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.682 nvme0n1 00:32:52.682 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.682 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.682 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.682 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.682 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.682 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTMyMDJlNjk5NGJiNTZlZTVhZThkZDE3YjVjZTZkNjRjjo9K: 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTMyMDJlNjk5NGJiNTZlZTVhZThkZDE3YjVjZTZkNjRjjo9K: 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: ]] 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:52.683 
01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.683 01:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.683 nvme0n1 00:32:52.683 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.683 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.683 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.683 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.683 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.683 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.941 01:15:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGU0MjE5MDNlMWMyYTAzZTY5MjA5MzE2MWEyNTQ0MTVmNTNjNTBhZjFhMjdhNWNkZAf89Q==: 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGU0MjE5MDNlMWMyYTAzZTY5MjA5MzE2MWEyNTQ0MTVmNTNjNTBhZjFhMjdhNWNkZAf89Q==: 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: ]] 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.941 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.942 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.942 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:52.942 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.942 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:52.942 nvme0n1 00:32:52.942 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.942 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.942 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.942 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.942 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.942 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.942 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.942 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.942 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.942 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjRlYjhmMmQ4MjgyM2FkOTUxMzdiOTgyMWVlMmUyMGUyZWI5ZDYxZGJmMzU0ZTZjYmJkZDdiNDc4MDczZmU2ZvVmQYI=: 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjRlYjhmMmQ4MjgyM2FkOTUxMzdiOTgyMWVlMmUyMGUyZWI5ZDYxZGJmMzU0ZTZjYmJkZDdiNDc4MDczZmU2ZvVmQYI=: 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.199 
01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.199 nvme0n1 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyNDNhOWM5MzQyZGVlZWJmYWVjODZmNjIwZmFiODJJ0+8u: 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTkyNDNhOWM5MzQyZGVlZWJmYWVjODZmNjIwZmFiODJJ0+8u: 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: ]] 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.199 01:15:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.199 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.456 nvme0n1 00:32:53.456 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.456 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.456 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.456 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.456 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.456 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.456 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.456 01:15:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==: 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==: 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: ]] 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.457 01:15:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.457 01:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.715 nvme0n1 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.715 01:15:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTMyMDJlNjk5NGJiNTZlZTVhZThkZDE3YjVjZTZkNjRjjo9K: 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTMyMDJlNjk5NGJiNTZlZTVhZThkZDE3YjVjZTZkNjRjjo9K: 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: ]] 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:53.715 01:15:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.715 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.716 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.716 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.716 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.716 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.716 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.716 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.716 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.716 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:53.716 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.716 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.974 nvme0n1 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.974 01:15:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGU0MjE5MDNlMWMyYTAzZTY5MjA5MzE2MWEyNTQ0MTVmNTNjNTBhZjFhMjdhNWNkZAf89Q==: 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGU0MjE5MDNlMWMyYTAzZTY5MjA5MzE2MWEyNTQ0MTVmNTNjNTBhZjFhMjdhNWNkZAf89Q==: 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: ]] 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.974 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.975 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.975 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.975 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.975 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.975 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.975 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:53.975 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.975 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:54.233 nvme0n1 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjRlYjhmMmQ4MjgyM2FkOTUxMzdiOTgyMWVlMmUyMGUyZWI5ZDYxZGJmMzU0ZTZjYmJkZDdiNDc4MDczZmU2ZvVmQYI=: 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjRlYjhmMmQ4MjgyM2FkOTUxMzdiOTgyMWVlMmUyMGUyZWI5ZDYxZGJmMzU0ZTZjYmJkZDdiNDc4MDczZmU2ZvVmQYI=: 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.233 
01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.233 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.234 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.234 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.234 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.234 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:54.234 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.234 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.491 nvme0n1 00:32:54.491 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.491 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.491 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.491 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.491 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.491 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.491 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.491 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.491 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.491 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.491 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.491 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:54.491 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.491 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:32:54.491 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.491 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyNDNhOWM5MzQyZGVlZWJmYWVjODZmNjIwZmFiODJJ0+8u: 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTkyNDNhOWM5MzQyZGVlZWJmYWVjODZmNjIwZmFiODJJ0+8u: 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: ]] 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.492 01:15:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.492 01:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.057 nvme0n1 00:32:55.057 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.057 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.057 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.057 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.057 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.057 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.057 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.057 01:15:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==: 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==: 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: ]] 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.058 01:15:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.058 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.316 nvme0n1 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.316 01:15:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTMyMDJlNjk5NGJiNTZlZTVhZThkZDE3YjVjZTZkNjRjjo9K: 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTMyMDJlNjk5NGJiNTZlZTVhZThkZDE3YjVjZTZkNjRjjo9K: 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: ]] 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:55.316 01:15:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.316 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.574 nvme0n1 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.574 01:15:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGU0MjE5MDNlMWMyYTAzZTY5MjA5MzE2MWEyNTQ0MTVmNTNjNTBhZjFhMjdhNWNkZAf89Q==: 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGU0MjE5MDNlMWMyYTAzZTY5MjA5MzE2MWEyNTQ0MTVmNTNjNTBhZjFhMjdhNWNkZAf89Q==: 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: ]] 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.574 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.575 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.575 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.575 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.575 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.575 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.575 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.575 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.575 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:55.575 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.575 01:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:56.142 nvme0n1 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjRlYjhmMmQ4MjgyM2FkOTUxMzdiOTgyMWVlMmUyMGUyZWI5ZDYxZGJmMzU0ZTZjYmJkZDdiNDc4MDczZmU2ZvVmQYI=: 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjRlYjhmMmQ4MjgyM2FkOTUxMzdiOTgyMWVlMmUyMGUyZWI5ZDYxZGJmMzU0ZTZjYmJkZDdiNDc4MDczZmU2ZvVmQYI=: 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.142 
01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.142 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.400 nvme0n1 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyNDNhOWM5MzQyZGVlZWJmYWVjODZmNjIwZmFiODJJ0+8u: 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTkyNDNhOWM5MzQyZGVlZWJmYWVjODZmNjIwZmFiODJJ0+8u: 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: ]] 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.400 01:15:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.400 01:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.966 nvme0n1 00:32:56.966 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.966 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.966 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.967 01:15:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==: 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==: 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: ]] 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.967 01:15:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.967 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.533 nvme0n1 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.533 01:15:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTMyMDJlNjk5NGJiNTZlZTVhZThkZDE3YjVjZTZkNjRjjo9K: 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTMyMDJlNjk5NGJiNTZlZTVhZThkZDE3YjVjZTZkNjRjjo9K: 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: ]] 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:57.533 01:15:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.533 01:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.098 nvme0n1 00:32:58.098 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.098 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.098 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.098 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.098 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.098 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.098 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.098 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.098 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.098 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.356 01:15:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGU0MjE5MDNlMWMyYTAzZTY5MjA5MzE2MWEyNTQ0MTVmNTNjNTBhZjFhMjdhNWNkZAf89Q==: 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGU0MjE5MDNlMWMyYTAzZTY5MjA5MzE2MWEyNTQ0MTVmNTNjNTBhZjFhMjdhNWNkZAf89Q==: 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: ]] 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.356 01:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:58.922 nvme0n1 00:32:58.922 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.922 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.922 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.922 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.922 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.922 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.922 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.922 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.922 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.922 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.922 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.922 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.922 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:32:58.922 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.922 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:58.923 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:58.923 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:58.923 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjRlYjhmMmQ4MjgyM2FkOTUxMzdiOTgyMWVlMmUyMGUyZWI5ZDYxZGJmMzU0ZTZjYmJkZDdiNDc4MDczZmU2ZvVmQYI=: 00:32:58.923 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:58.923 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:58.923 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:58.923 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjRlYjhmMmQ4MjgyM2FkOTUxMzdiOTgyMWVlMmUyMGUyZWI5ZDYxZGJmMzU0ZTZjYmJkZDdiNDc4MDczZmU2ZvVmQYI=: 00:32:58.923 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:58.923 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:32:58.923 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.923 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:58.923 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:58.923 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:58.923 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.923 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:58.923 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.923 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.923 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.923 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.923 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.923 
01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.923 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.923 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.923 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.923 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.923 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.923 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.923 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.923 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.923 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:58.923 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.923 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.488 nvme0n1 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkyNDNhOWM5MzQyZGVlZWJmYWVjODZmNjIwZmFiODJJ0+8u: 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTkyNDNhOWM5MzQyZGVlZWJmYWVjODZmNjIwZmFiODJJ0+8u: 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: ]] 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjExNDhlMDVhZWUzMmRiYWMxYjE4YmM5ZjA1YTI3Y2MyNzg4NGRhNGY5NmM2YjFhOTZjMzc3ZGZiY2VmNTg1ZE9lVG8=: 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.488 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:59.489 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.489 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.489 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.489 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.489 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.489 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.489 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.489 01:15:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.489 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.489 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.489 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.489 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.489 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.489 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.489 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:59.489 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.489 01:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.423 nvme0n1 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.423 01:15:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==: 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==: 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: ]] 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.423 01:15:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.423 01:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.356 nvme0n1 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.614 01:15:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTMyMDJlNjk5NGJiNTZlZTVhZThkZDE3YjVjZTZkNjRjjo9K: 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTMyMDJlNjk5NGJiNTZlZTVhZThkZDE3YjVjZTZkNjRjjo9K: 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: ]] 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDUyYTYzYTAzNDljMjU3N2Q3NWMzMjhlMWI0MjExZjPPEoXR: 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:01.614 01:15:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.614 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.615 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.615 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.615 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.615 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.615 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.615 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.615 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.615 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.615 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.615 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.615 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.615 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:01.615 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.615 01:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.550 nvme0n1 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.550 01:15:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGU0MjE5MDNlMWMyYTAzZTY5MjA5MzE2MWEyNTQ0MTVmNTNjNTBhZjFhMjdhNWNkZAf89Q==: 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGU0MjE5MDNlMWMyYTAzZTY5MjA5MzE2MWEyNTQ0MTVmNTNjNTBhZjFhMjdhNWNkZAf89Q==: 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: ]] 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWNkNzA3Y2VjZGUwZDQ2YmFlZDZkMWQwMTUwZDhmZjjOlVD6: 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.550 01:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:33:03.481 nvme0n1 00:33:03.481 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.481 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.481 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.481 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.481 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.481 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.481 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.481 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.481 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.481 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.481 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.481 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.481 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:03.481 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.481 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:03.481 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:03.481 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:03.481 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjRlYjhmMmQ4MjgyM2FkOTUxMzdiOTgyMWVlMmUyMGUyZWI5ZDYxZGJmMzU0ZTZjYmJkZDdiNDc4MDczZmU2ZvVmQYI=: 00:33:03.482 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:03.482 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:03.482 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:03.482 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjRlYjhmMmQ4MjgyM2FkOTUxMzdiOTgyMWVlMmUyMGUyZWI5ZDYxZGJmMzU0ZTZjYmJkZDdiNDc4MDczZmU2ZvVmQYI=: 00:33:03.482 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:03.482 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:03.482 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.482 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:03.482 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:03.482 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:03.482 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.482 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:03.482 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.482 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.482 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.482 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.482 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.482 
01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.482 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.482 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.482 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.482 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.482 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.482 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.482 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.482 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.482 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:03.482 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.482 01:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.413 nvme0n1 00:33:04.413 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.413 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.413 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.413 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.413 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.413 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.413 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.413 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.413 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.413 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==: 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyZDU0NDI0NWVjMzFkMzA3NTAzNDBkMTJkNjRjMDllZjllNGYzM2FiMTY5NmFh9SUItg==: 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: ]] 00:33:04.671 
01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTFhNmM5M2MxMTA4NjA2OTk1YjE4MWNiNmU4NmYyMzljZjE2NDBkMTk3ZmVkYjVhNdJ09w==: 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.671 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.671 request: 00:33:04.671 { 00:33:04.671 "name": "nvme0", 00:33:04.671 "trtype": "tcp", 00:33:04.671 "traddr": "10.0.0.1", 00:33:04.671 "adrfam": "ipv4", 00:33:04.671 "trsvcid": "4420", 00:33:04.671 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:04.671 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:04.671 "prchk_reftag": false, 00:33:04.671 "prchk_guard": false, 00:33:04.671 "hdgst": false, 00:33:04.671 "ddgst": false, 00:33:04.671 "method": "bdev_nvme_attach_controller", 00:33:04.671 "req_id": 1 00:33:04.671 } 00:33:04.671 Got JSON-RPC error response 00:33:04.671 response: 00:33:04.671 { 00:33:04.671 "code": -5, 00:33:04.671 "message": "Input/output error" 00:33:04.671 } 00:33:04.672 01:15:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:04.672 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:33:04.672 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:04.672 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:04.672 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:04.672 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.672 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.672 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:04.672 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.672 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.672 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:04.672 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:04.672 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.672 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.672 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.672 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.672 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.672 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.672 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.672 01:15:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.672 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.672 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.672 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:04.672 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:33:04.672 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:04.672 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:04.672 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:04.672 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:04.672 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:04.672 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:04.672 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.672 01:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.672 request: 00:33:04.672 { 00:33:04.672 "name": "nvme0", 00:33:04.672 "trtype": "tcp", 00:33:04.672 "traddr": "10.0.0.1", 00:33:04.672 "adrfam": "ipv4", 00:33:04.672 
"trsvcid": "4420", 00:33:04.672 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:04.672 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:04.672 "prchk_reftag": false, 00:33:04.672 "prchk_guard": false, 00:33:04.672 "hdgst": false, 00:33:04.672 "ddgst": false, 00:33:04.672 "dhchap_key": "key2", 00:33:04.672 "method": "bdev_nvme_attach_controller", 00:33:04.672 "req_id": 1 00:33:04.672 } 00:33:04.672 Got JSON-RPC error response 00:33:04.672 response: 00:33:04.672 { 00:33:04.672 "code": -5, 00:33:04.672 "message": "Input/output error" 00:33:04.672 } 00:33:04.672 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:04.672 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:33:04.672 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:04.672 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:04.672 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:04.672 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.672 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.672 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:04.672 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.672 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.930 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:04.930 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:04.930 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.930 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.930 
01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.930 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.930 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.930 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.930 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.930 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.930 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.930 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.930 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:04.930 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:33:04.930 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:04.930 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:04.930 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:04.930 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:04.930 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:04.930 01:15:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:04.930 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.930 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.930 request: 00:33:04.930 { 00:33:04.930 "name": "nvme0", 00:33:04.930 "trtype": "tcp", 00:33:04.930 "traddr": "10.0.0.1", 00:33:04.930 "adrfam": "ipv4", 00:33:04.930 "trsvcid": "4420", 00:33:04.930 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:04.930 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:04.930 "prchk_reftag": false, 00:33:04.930 "prchk_guard": false, 00:33:04.930 "hdgst": false, 00:33:04.930 "ddgst": false, 00:33:04.930 "dhchap_key": "key1", 00:33:04.930 "dhchap_ctrlr_key": "ckey2", 00:33:04.930 "method": "bdev_nvme_attach_controller", 00:33:04.930 "req_id": 1 00:33:04.930 } 00:33:04.930 Got JSON-RPC error response 00:33:04.930 response: 00:33:04.930 { 00:33:04.930 "code": -5, 00:33:04.930 "message": "Input/output error" 00:33:04.930 } 00:33:04.930 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:04.930 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:33:04.930 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:04.931 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:04.931 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:04.931 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:33:04.931 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:33:04.931 01:15:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:04.931 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:04.931 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:33:04.931 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:04.931 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:33:04.931 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:04.931 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:04.931 rmmod nvme_tcp 00:33:04.931 rmmod nvme_fabrics 00:33:04.931 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:04.931 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:33:04.931 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:33:04.931 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1964260 ']' 00:33:04.931 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1964260 00:33:04.931 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 1964260 ']' 00:33:04.931 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 1964260 00:33:04.931 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:33:04.931 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:04.931 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1964260 00:33:04.931 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:04.931 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:33:04.931 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1964260' 00:33:04.931 killing process with pid 1964260 00:33:04.931 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 1964260 00:33:04.931 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 1964260 00:33:05.190 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:05.190 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:05.190 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:05.190 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:05.190 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:05.190 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:05.190 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:05.190 01:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.089 01:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:07.089 01:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:07.089 01:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:07.089 01:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:07.089 01:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 
00:33:07.089 01:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:33:07.089 01:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:07.089 01:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:07.089 01:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:07.089 01:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:07.089 01:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:07.089 01:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:07.346 01:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:08.720 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:08.720 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:08.720 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:08.720 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:08.720 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:08.720 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:08.720 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:08.720 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:08.720 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:08.720 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:08.720 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:08.720 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:08.720 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:08.720 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:08.720 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:08.720 0000:80:04.0 (8086 0e20): 
ioatdma -> vfio-pci 00:33:09.652 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:09.653 01:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.5Gf /tmp/spdk.key-null.2xJ /tmp/spdk.key-sha256.PlB /tmp/spdk.key-sha384.vJv /tmp/spdk.key-sha512.7u8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:09.653 01:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:10.625 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:10.625 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:10.625 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:10.625 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:10.625 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:10.625 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:10.625 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:10.625 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:10.625 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:10.625 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:10.625 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:10.625 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:10.625 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:10.625 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:10.625 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:10.625 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:10.625 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:10.884 00:33:10.884 real 0m49.801s 00:33:10.884 user 0m47.624s 00:33:10.884 sys 0m5.650s 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:10.884 01:15:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.884 ************************************ 00:33:10.884 END TEST nvmf_auth_host 00:33:10.884 ************************************ 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.884 ************************************ 00:33:10.884 START TEST nvmf_digest 00:33:10.884 ************************************ 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:10.884 * Looking for test storage... 
00:33:10.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.884 01:15:41 
nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 
00:33:10.884 01:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:13.417 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:13.417 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:33:13.417 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:13.417 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:13.417 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:13.417 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:13.417 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:13.417 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:13.418 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:13.418 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:13.418 01:15:43 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:13.418 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:13.418 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:33:13.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:33:13.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms
00:33:13.418
00:33:13.418 --- 10.0.0.2 ping statistics ---
00:33:13.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:13.418 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms
00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:33:13.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:33:13.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms
00:33:13.418
00:33:13.418 --- 10.0.0.1 ping statistics ---
00:33:13.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:13.418 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms
00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0
00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT
00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]]
00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest
00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable
00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:33:13.418 ************************************
00:33:13.418 START TEST nvmf_digest_clean
00:33:13.418 ************************************
00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest
00:33:13.418 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator
00:33:13.419 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]]
00:33:13.419 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false
00:33:13.419 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc")
00:33:13.419 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc
00:33:13.419 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:33:13.419 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable
00:33:13.419 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:33:13.419 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1974380 00:33:13.419 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:13.419 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1974380 00:33:13.419 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1974380 ']' 00:33:13.419 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:13.419 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:13.419 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:13.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:13.419 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:13.419 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:13.419 [2024-07-26 01:15:43.495444] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:33:13.419 [2024-07-26 01:15:43.495515] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:13.419 EAL: No free 2048 kB hugepages reported on node 1 00:33:13.419 [2024-07-26 01:15:43.559577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.419 [2024-07-26 01:15:43.642560] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:13.419 [2024-07-26 01:15:43.642626] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:13.419 [2024-07-26 01:15:43.642639] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:13.419 [2024-07-26 01:15:43.642650] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:13.419 [2024-07-26 01:15:43.642660] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:13.419 [2024-07-26 01:15:43.642685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:13.419 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:13.419 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:33:13.419 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:13.419 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:13.419 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:13.419 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:13.419 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:13.419 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:13.419 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:13.419 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.419 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:13.677 null0 00:33:13.677 [2024-07-26 01:15:43.886020] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:13.677 [2024-07-26 01:15:43.910284] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:13.677 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.677 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:33:13.677 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:13.677 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:13.677 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:13.677 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:13.677 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:13.677 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:13.677 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1974400 00:33:13.677 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1974400 /var/tmp/bperf.sock 00:33:13.677 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1974400 ']' 00:33:13.677 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:13.677 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:13.677 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:13.677 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:13.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:33:13.677 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:13.677 01:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:13.677 [2024-07-26 01:15:43.959648] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:33:13.677 [2024-07-26 01:15:43.959733] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974400 ] 00:33:13.677 EAL: No free 2048 kB hugepages reported on node 1 00:33:13.677 [2024-07-26 01:15:44.017750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.677 [2024-07-26 01:15:44.102615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:13.935 01:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:13.935 01:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:33:13.935 01:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:13.935 01:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:13.935 01:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:14.193 01:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:14.193 01:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:14.451 nvme0n1
00:33:14.451 01:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:33:14.451 01:15:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:14.710 Running I/O for 2 seconds...
00:33:16.609
00:33:16.609 Latency(us)
00:33:16.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:16.609 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:33:16.609 nvme0n1 : 2.01 17902.54 69.93 0.00 0.00 7137.64 3373.89 17961.72
00:33:16.609 ===================================================================================================================
00:33:16.609 Total : 17902.54 69.93 0.00 0.00 7137.64 3373.89 17961.72
00:33:16.609 0
00:33:16.609 01:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:33:16.609 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:33:16.609 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:33:16.609 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:33:16.609 | select(.opcode=="crc32c")
00:33:16.609 | "\(.module_name) \(.executed)"'
00:33:16.609 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:33:16.867 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:33:16.867 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean --
host/digest.sh@94 -- # exp_module=software
00:33:16.867 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:33:16.867 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:33:16.867 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1974400
00:33:16.867 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1974400 ']'
00:33:16.867 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1974400
00:33:16.867 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:33:16.867 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:33:16.867 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1974400
00:33:16.867 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:33:16.867 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:33:16.867 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1974400'
00:33:16.867 killing process with pid 1974400
00:33:16.867 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1974400
00:33:16.867 Received shutdown signal, test time was about 2.000000 seconds
00:33:16.867
00:33:16.867 Latency(us)
00:33:16.867 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:16.867 ===================================================================================================================
00:33:16.867 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:16.867 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1974400
00:33:17.125 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false
00:33:17.125 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:33:17.125 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:33:17.125 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread
00:33:17.125 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:33:17.125 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:33:17.125 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:33:17.125 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1974806
00:33:17.125 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:33:17.125 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1974806 /var/tmp/bperf.sock
00:33:17.125 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1974806 ']'
00:33:17.125 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:17.125 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100
00:33:17.125 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for
process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:17.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:17.126 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:17.126 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:17.384 [2024-07-26 01:15:47.554695] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:33:17.384 [2024-07-26 01:15:47.554787] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974806 ] 00:33:17.384 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:17.384 Zero copy mechanism will not be used. 00:33:17.384 EAL: No free 2048 kB hugepages reported on node 1 00:33:17.384 [2024-07-26 01:15:47.615840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.384 [2024-07-26 01:15:47.710736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:17.384 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:17.384 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:33:17.384 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:17.384 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:17.384 01:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:17.951 01:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:17.951 01:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:18.209 nvme0n1 00:33:18.209 01:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:18.209 01:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:18.467 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:18.467 Zero copy mechanism will not be used. 00:33:18.467 Running I/O for 2 seconds... 00:33:20.365 00:33:20.365 Latency(us) 00:33:20.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:20.365 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:20.365 nvme0n1 : 2.00 4405.28 550.66 0.00 0.00 3627.70 1347.13 5048.70 00:33:20.365 =================================================================================================================== 00:33:20.365 Total : 4405.28 550.66 0.00 0.00 3627.70 1347.13 5048.70 00:33:20.365 0 00:33:20.365 01:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:20.365 01:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:20.365 01:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:20.365 01:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bperf.sock accel_get_stats 00:33:20.365 01:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:20.365 | select(.opcode=="crc32c") 00:33:20.365 | "\(.module_name) \(.executed)"' 00:33:20.623 01:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:20.623 01:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:20.623 01:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:20.623 01:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:20.623 01:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1974806 00:33:20.623 01:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1974806 ']' 00:33:20.623 01:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1974806 00:33:20.623 01:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:33:20.623 01:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:20.623 01:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1974806 00:33:20.623 01:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:20.623 01:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:20.623 01:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1974806' 00:33:20.623 killing process with pid 1974806 00:33:20.623 01:15:50 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1974806 00:33:20.623 Received shutdown signal, test time was about 2.000000 seconds 00:33:20.623 00:33:20.623 Latency(us) 00:33:20.623 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:20.623 =================================================================================================================== 00:33:20.623 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:20.623 01:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1974806 00:33:20.881 01:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:20.881 01:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:20.881 01:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:20.881 01:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:20.882 01:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:20.882 01:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:20.882 01:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:20.882 01:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1975285 00:33:20.882 01:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1975285 /var/tmp/bperf.sock 00:33:20.882 01:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:20.882 01:15:51 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1975285 ']' 00:33:20.882 01:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:20.882 01:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:20.882 01:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:20.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:20.882 01:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:20.882 01:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:20.882 [2024-07-26 01:15:51.219179] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:33:20.882 [2024-07-26 01:15:51.219268] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975285 ] 00:33:20.882 EAL: No free 2048 kB hugepages reported on node 1 00:33:20.882 [2024-07-26 01:15:51.276480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:21.140 [2024-07-26 01:15:51.359631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:21.140 01:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:21.140 01:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:33:21.140 01:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:21.140 01:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:21.140 01:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:21.398 01:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:21.398 01:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:21.656 nvme0n1 00:33:21.656 01:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:21.656 01:15:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:21.914 Running I/O for 2 seconds... 00:33:23.813 00:33:23.813 Latency(us) 00:33:23.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:23.813 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:23.813 nvme0n1 : 2.01 19121.98 74.70 0.00 0.00 6677.91 3131.16 9757.58 00:33:23.813 =================================================================================================================== 00:33:23.813 Total : 19121.98 74.70 0.00 0.00 6677.91 3131.16 9757.58 00:33:23.813 0 00:33:23.813 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:23.813 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:23.813 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:23.813 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:23.813 | select(.opcode=="crc32c") 00:33:23.813 | "\(.module_name) \(.executed)"' 00:33:23.813 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:24.071 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:24.071 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:24.071 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:24.071 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:24.071 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@98 -- # killprocess 1975285 00:33:24.071 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1975285 ']' 00:33:24.071 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1975285 00:33:24.071 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:33:24.071 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:24.071 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1975285 00:33:24.071 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:24.071 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:24.071 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1975285' 00:33:24.071 killing process with pid 1975285 00:33:24.071 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1975285 00:33:24.071 Received shutdown signal, test time was about 2.000000 seconds 00:33:24.071 00:33:24.071 Latency(us) 00:33:24.071 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.071 =================================================================================================================== 00:33:24.071 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:24.071 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1975285 00:33:24.329 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:24.329 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 
-- # local rw bs qd scan_dsa 00:33:24.329 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:24.329 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:24.329 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:24.329 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:24.329 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:24.329 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1975739 00:33:24.329 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:24.329 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1975739 /var/tmp/bperf.sock 00:33:24.329 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1975739 ']' 00:33:24.329 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:24.329 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:24.329 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:24.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:33:24.329 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:24.329 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:24.329 [2024-07-26 01:15:54.721575] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:33:24.329 [2024-07-26 01:15:54.721662] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975739 ] 00:33:24.329 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:24.329 Zero copy mechanism will not be used. 00:33:24.329 EAL: No free 2048 kB hugepages reported on node 1 00:33:24.587 [2024-07-26 01:15:54.783897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.587 [2024-07-26 01:15:54.877920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:24.587 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:24.587 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:33:24.587 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:24.587 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:24.587 01:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:24.845 01:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:24.845 01:15:55 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:25.410 nvme0n1 00:33:25.410 01:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:25.410 01:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:25.668 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:25.668 Zero copy mechanism will not be used. 00:33:25.668 Running I/O for 2 seconds... 00:33:27.569 00:33:27.569 Latency(us) 00:33:27.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:27.569 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:27.569 nvme0n1 : 2.00 4219.61 527.45 0.00 0.00 3782.55 3021.94 12379.02 00:33:27.569 =================================================================================================================== 00:33:27.569 Total : 4219.61 527.45 0.00 0.00 3782.55 3021.94 12379.02 00:33:27.569 0 00:33:27.569 01:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:27.569 01:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:27.569 01:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:27.569 01:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:27.569 | select(.opcode=="crc32c") 00:33:27.569 | "\(.module_name) \(.executed)"' 00:33:27.569 01:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:27.827 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:27.827 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:27.827 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:27.828 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:27.828 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1975739 00:33:27.828 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1975739 ']' 00:33:27.828 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1975739 00:33:27.828 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:33:27.828 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:27.828 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1975739 00:33:27.828 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:27.828 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:27.828 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1975739' 00:33:27.828 killing process with pid 1975739 00:33:27.828 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1975739 00:33:27.828 Received shutdown signal, test time was about 2.000000 seconds 
00:33:27.828 00:33:27.828 Latency(us) 00:33:27.828 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:27.828 =================================================================================================================== 00:33:27.828 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:27.828 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1975739 00:33:28.086 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1974380 00:33:28.086 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1974380 ']' 00:33:28.086 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1974380 00:33:28.086 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:33:28.086 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:28.086 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1974380 00:33:28.086 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:28.086 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:28.086 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1974380' 00:33:28.086 killing process with pid 1974380 00:33:28.086 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1974380 00:33:28.086 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1974380 00:33:28.344 00:33:28.344 real 0m15.228s 00:33:28.344 user 0m30.236s 00:33:28.344 sys 0m4.106s 
00:33:28.344 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:28.344 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:28.344 ************************************ 00:33:28.344 END TEST nvmf_digest_clean 00:33:28.344 ************************************ 00:33:28.344 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:28.344 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:28.344 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:28.344 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:28.344 ************************************ 00:33:28.344 START TEST nvmf_digest_error 00:33:28.344 ************************************ 00:33:28.344 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:33:28.344 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:28.344 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:28.344 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:28.344 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:28.344 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1976177 00:33:28.344 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:28.344 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
nvmf/common.sh@482 -- # waitforlisten 1976177 00:33:28.344 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1976177 ']' 00:33:28.344 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:28.344 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:28.344 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:28.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:28.344 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:28.344 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:28.601 [2024-07-26 01:15:58.780019] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:33:28.602 [2024-07-26 01:15:58.780101] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:28.602 EAL: No free 2048 kB hugepages reported on node 1 00:33:28.602 [2024-07-26 01:15:58.842685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:28.602 [2024-07-26 01:15:58.926302] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:28.602 [2024-07-26 01:15:58.926372] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:28.602 [2024-07-26 01:15:58.926386] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:28.602 [2024-07-26 01:15:58.926396] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:28.602 [2024-07-26 01:15:58.926407] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:28.602 [2024-07-26 01:15:58.926436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:28.602 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:28.602 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:33:28.602 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:28.602 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:28.602 01:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:28.602 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:28.602 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:28.602 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.602 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:28.602 [2024-07-26 01:15:59.006997] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:28.602 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.602 01:15:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:28.602 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:28.602 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.602 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:28.860 null0 00:33:28.860 [2024-07-26 01:15:59.120559] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:28.860 [2024-07-26 01:15:59.144806] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:28.860 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.860 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:28.860 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:28.860 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:28.860 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:28.860 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:28.860 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1976311 00:33:28.860 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:28.860 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1976311 /var/tmp/bperf.sock 00:33:28.860 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1976311 ']' 
00:33:28.860 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:28.860 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:28.860 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:28.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:28.860 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:28.860 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:28.860 [2024-07-26 01:15:59.194878] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:33:28.860 [2024-07-26 01:15:59.194956] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1976311 ] 00:33:28.860 EAL: No free 2048 kB hugepages reported on node 1 00:33:28.860 [2024-07-26 01:15:59.260511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:29.119 [2024-07-26 01:15:59.351587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:29.119 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:29.119 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:33:29.119 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:29.119 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:29.382 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:29.382 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.382 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:29.382 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.382 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:29.382 01:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:29.642 nvme0n1 00:33:29.642 01:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:29.642 01:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.642 01:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:29.642 01:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.642 01:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:29.642 01:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:29.906 Running I/O for 2 seconds... 00:33:29.906 [2024-07-26 01:16:00.185625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:29.906 [2024-07-26 01:16:00.185679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.906 [2024-07-26 01:16:00.185701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.906 [2024-07-26 01:16:00.203686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:29.906 [2024-07-26 01:16:00.203729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.906 [2024-07-26 01:16:00.203754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.906 [2024-07-26 01:16:00.221599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:29.906 [2024-07-26 01:16:00.221638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.906 [2024-07-26 01:16:00.221657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.906 [2024-07-26 01:16:00.241284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:29.906 [2024-07-26 01:16:00.241320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9418 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.906 [2024-07-26 01:16:00.241353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.906 [2024-07-26 01:16:00.261661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:29.906 [2024-07-26 01:16:00.261698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.906 [2024-07-26 01:16:00.261717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.907 [2024-07-26 01:16:00.279471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:29.907 [2024-07-26 01:16:00.279508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.907 [2024-07-26 01:16:00.279527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.907 [2024-07-26 01:16:00.299937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:29.907 [2024-07-26 01:16:00.299972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.907 [2024-07-26 01:16:00.300001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.907 [2024-07-26 01:16:00.320291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:29.907 [2024-07-26 01:16:00.320322] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.907 [2024-07-26 01:16:00.320339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.206 [2024-07-26 01:16:00.334187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.206 [2024-07-26 01:16:00.334218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.206 [2024-07-26 01:16:00.334235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.206 [2024-07-26 01:16:00.353917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.206 [2024-07-26 01:16:00.353953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.206 [2024-07-26 01:16:00.353972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.206 [2024-07-26 01:16:00.374378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.206 [2024-07-26 01:16:00.374425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.206 [2024-07-26 01:16:00.374445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.206 [2024-07-26 01:16:00.392147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 
00:33:30.206 [2024-07-26 01:16:00.392179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.206 [2024-07-26 01:16:00.392197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.206 [2024-07-26 01:16:00.409471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.206 [2024-07-26 01:16:00.409507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.206 [2024-07-26 01:16:00.409526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.206 [2024-07-26 01:16:00.427671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.206 [2024-07-26 01:16:00.427707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.206 [2024-07-26 01:16:00.427727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.206 [2024-07-26 01:16:00.447834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.206 [2024-07-26 01:16:00.447870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.206 [2024-07-26 01:16:00.447890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.206 [2024-07-26 01:16:00.465351] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.206 [2024-07-26 01:16:00.465387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.206 [2024-07-26 01:16:00.465407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.206 [2024-07-26 01:16:00.480135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.206 [2024-07-26 01:16:00.480164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.206 [2024-07-26 01:16:00.480180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.206 [2024-07-26 01:16:00.498738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.206 [2024-07-26 01:16:00.498775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.206 [2024-07-26 01:16:00.498795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.206 [2024-07-26 01:16:00.516272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.206 [2024-07-26 01:16:00.516302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.206 [2024-07-26 01:16:00.516318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:33:30.206 [2024-07-26 01:16:00.536614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.206 [2024-07-26 01:16:00.536650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.206 [2024-07-26 01:16:00.536670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.206 [2024-07-26 01:16:00.554521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.206 [2024-07-26 01:16:00.554558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.206 [2024-07-26 01:16:00.554577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.206 [2024-07-26 01:16:00.575842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.206 [2024-07-26 01:16:00.575878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.206 [2024-07-26 01:16:00.575898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.206 [2024-07-26 01:16:00.596485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.206 [2024-07-26 01:16:00.596521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.206 [2024-07-26 01:16:00.596540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.207 [2024-07-26 01:16:00.613689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.207 [2024-07-26 01:16:00.613725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.207 [2024-07-26 01:16:00.613755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.472 [2024-07-26 01:16:00.631937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.472 [2024-07-26 01:16:00.631972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.472 [2024-07-26 01:16:00.631992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.472 [2024-07-26 01:16:00.646419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.472 [2024-07-26 01:16:00.646454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.472 [2024-07-26 01:16:00.646473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.472 [2024-07-26 01:16:00.664846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.472 [2024-07-26 01:16:00.664883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.472 [2024-07-26 01:16:00.664903] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.472 [2024-07-26 01:16:00.684683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.472 [2024-07-26 01:16:00.684721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.472 [2024-07-26 01:16:00.684741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.472 [2024-07-26 01:16:00.704397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.472 [2024-07-26 01:16:00.704433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.472 [2024-07-26 01:16:00.704452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.472 [2024-07-26 01:16:00.725121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.472 [2024-07-26 01:16:00.725151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.472 [2024-07-26 01:16:00.725168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.472 [2024-07-26 01:16:00.737985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.472 [2024-07-26 01:16:00.738020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14455 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:30.472 [2024-07-26 01:16:00.738039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.472 [2024-07-26 01:16:00.756322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.472 [2024-07-26 01:16:00.756352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.472 [2024-07-26 01:16:00.756369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.472 [2024-07-26 01:16:00.777417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.472 [2024-07-26 01:16:00.777464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.472 [2024-07-26 01:16:00.777484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.472 [2024-07-26 01:16:00.796464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.472 [2024-07-26 01:16:00.796500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.472 [2024-07-26 01:16:00.796519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.472 [2024-07-26 01:16:00.809836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.472 [2024-07-26 01:16:00.809871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:18 nsid:1 lba:5099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.472 [2024-07-26 01:16:00.809890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.472 [2024-07-26 01:16:00.831250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.472 [2024-07-26 01:16:00.831280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.472 [2024-07-26 01:16:00.831297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.472 [2024-07-26 01:16:00.851753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.472 [2024-07-26 01:16:00.851790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.472 [2024-07-26 01:16:00.851809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.472 [2024-07-26 01:16:00.871322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.472 [2024-07-26 01:16:00.871368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.472 [2024-07-26 01:16:00.871388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.472 [2024-07-26 01:16:00.891596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.472 [2024-07-26 
01:16:00.891632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.472 [2024-07-26 01:16:00.891651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.730 [2024-07-26 01:16:00.909267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.731 [2024-07-26 01:16:00.909298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.731 [2024-07-26 01:16:00.909315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.731 [2024-07-26 01:16:00.923579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.731 [2024-07-26 01:16:00.923615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.731 [2024-07-26 01:16:00.923635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.731 [2024-07-26 01:16:00.940976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.731 [2024-07-26 01:16:00.941012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.731 [2024-07-26 01:16:00.941031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.731 [2024-07-26 01:16:00.959941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1397cc0) 00:33:30.731 [2024-07-26 01:16:00.959978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.731 [2024-07-26 01:16:00.959996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.731 [2024-07-26 01:16:00.980866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.731 [2024-07-26 01:16:00.980903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.731 [2024-07-26 01:16:00.980923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.731 [2024-07-26 01:16:01.001273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.731 [2024-07-26 01:16:01.001319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.731 [2024-07-26 01:16:01.001336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.731 [2024-07-26 01:16:01.018723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.731 [2024-07-26 01:16:01.018759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.731 [2024-07-26 01:16:01.018778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.731 [2024-07-26 01:16:01.038783] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.731 [2024-07-26 01:16:01.038819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.731 [2024-07-26 01:16:01.038839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.731 [2024-07-26 01:16:01.055834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.731 [2024-07-26 01:16:01.055871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.731 [2024-07-26 01:16:01.055890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.731 [2024-07-26 01:16:01.076269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.731 [2024-07-26 01:16:01.076300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.731 [2024-07-26 01:16:01.076316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.731 [2024-07-26 01:16:01.089640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0) 00:33:30.731 [2024-07-26 01:16:01.089675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.731 [2024-07-26 01:16:01.089704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0
00:33:30.731 [2024-07-26 01:16:01.107752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1397cc0)
00:33:30.731 [2024-07-26 01:16:01.107788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:30.731 [2024-07-26 01:16:01.107808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line sequence (data digest error on tqpair 0x1397cc0, READ command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for the remaining in-flight commands, timestamps 01:16:01.126 through 01:16:02.166, differing only in cid and lba ...]
00:33:31.791 Latency(us)
00:33:31.791 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:31.791 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:33:31.791 nvme0n1 : 2.00 14028.90 54.80 0.00 0.00 9114.32 3956.43 27962.03
00:33:31.791 ===================================================================================================================
00:33:31.791 Total : 14028.90 54.80 0.00 0.00 9114.32 3956.43 27962.03
00:33:31.791 0
00:33:31.791 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:31.791 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:31.791 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:31.791 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:31.791 | .driver_specific 00:33:31.791 | .nvme_error 00:33:31.791 | .status_code 00:33:31.791 | .command_transient_transport_error' 00:33:32.049 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 110 > 0 )) 00:33:32.049 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1976311 00:33:32.049 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1976311 ']' 00:33:32.049 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1976311 00:33:32.049 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:33:32.049 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:32.049 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1976311 00:33:32.049 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:32.049 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:32.049 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1976311' 00:33:32.049 killing process with pid 1976311 00:33:32.049 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1976311 00:33:32.049 Received shutdown signal, test time was about 2.000000 seconds 00:33:32.049 00:33:32.049 Latency(us) 00:33:32.049 Device Information : runtime(s) IOPS MiB/s Fail/s 
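The `get_transient_errcount` step above pipes `bdev_get_iostat` output through the jq filter `.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error`. A minimal Python sketch of the same extraction, using an illustrative reply (the nested field names mirror the jq filter in the trace; the counter value is made up for the example, not taken from this run):

```python
import json

# Illustrative shape of a bdev_get_iostat JSON-RPC reply; only the fields
# that the digest test's jq filter touches are included here.
iostat = json.loads("""
{"bdevs": [{"name": "nvme0n1",
            "driver_specific": {"nvme_error": {"status_code":
                {"command_transient_transport_error": 110}}}}]}
""")

def transient_errcount(stats: dict) -> int:
    """Drill into bdevs[0] the same way host/digest.sh@28's jq filter does."""
    return stats["bdevs"][0]["driver_specific"]["nvme_error"][
        "status_code"]["command_transient_transport_error"]

print(transient_errcount(iostat))  # -> 110
```

The test script then only checks that this counter is greater than zero, which is why any nonzero number of injected digest errors passes the `(( count > 0 ))` gate.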
TO/s Average min max 00:33:32.049 =================================================================================================================== 00:33:32.049 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:32.049 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1976311 00:33:32.307 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:33:32.307 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:32.307 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:32.307 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:32.307 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:32.307 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1976724 00:33:32.307 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:33:32.307 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1976724 /var/tmp/bperf.sock 00:33:32.307 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1976724 ']' 00:33:32.307 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:32.307 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:32.307 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:33:32.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:32.307 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:32.308 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:32.308 [2024-07-26 01:16:02.726587] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:33:32.308 [2024-07-26 01:16:02.726663] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1976724 ] 00:33:32.308 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:32.308 Zero copy mechanism will not be used. 00:33:32.566 EAL: No free 2048 kB hugepages reported on node 1 00:33:32.566 [2024-07-26 01:16:02.789111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.566 [2024-07-26 01:16:02.880375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:32.824 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:32.824 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:33:32.824 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:32.824 01:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:33.080 01:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:33.080 01:16:03 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.080 01:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:33.080 01:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.080 01:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:33.080 01:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:33.336 nvme0n1 00:33:33.336 01:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:33.336 01:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.336 01:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:33.336 01:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.336 01:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:33.336 01:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:33.336 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:33.336 Zero copy mechanism will not be used. 00:33:33.336 Running I/O for 2 seconds... 
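The setup sequence above (from host/digest.sh) can be summarized as the following sketch. It only echoes each command rather than executing it, since it assumes an SPDK checkout at the Jenkins workspace path and a live NVMe-oF TCP target at 10.0.0.2:4420; swap the `run` helper for real execution on such a system. The order mirrors the log: start bdevperf in wait mode, configure retries, disable then re-enable (as corruption) crc32c error injection around controller attach, and finally drive I/O.

```shell
#!/bin/sh
# Paths below are taken from the log; adjust SPDK to your checkout.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock

# Echo-only stand-in for executing each step on a live target.
run() { echo "+ $*"; }

# 1. Launch bdevperf waiting for RPCs (-z), randread 128K qd=16 for 2s.
run "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" \
    -w randread -o 131072 -t 2 -q 16 -z
# 2. Enable per-NVMe error stats and retry failed I/O indefinitely.
run "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1
# 3. Make sure crc32c injection is off while attaching the controller.
run "$SPDK/scripts/rpc.py" -s "$SOCK" accel_error_inject_error -o crc32c -t disable
# 4. Attach with data digest (--ddgst) so corrupted CRCs are detected.
run "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# 5. Corrupt 32 crc32c operations, then run the workload; the log's
#    repeated "data digest error" notices are the expected result.
run "$SPDK/scripts/rpc.py" -s "$SOCK" accel_error_inject_error -o crc32c -t corrupt -i 32
run "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
```

Because the host enables only the data digest (`--ddgst`), each injected crc32c corruption surfaces as a TRANSIENT TRANSPORT ERROR completion rather than a connection drop, which is exactly the pattern the nvmf_digest_error test asserts on.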
00:33:33.336 [2024-07-26 01:16:03.734858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.336 [2024-07-26 01:16:03.734911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.336 [2024-07-26 01:16:03.734932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.336 [2024-07-26 01:16:03.741987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.336 [2024-07-26 01:16:03.742034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.336 [2024-07-26 01:16:03.742068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.336 [2024-07-26 01:16:03.749592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.336 [2024-07-26 01:16:03.749641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.336 [2024-07-26 01:16:03.749661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.336 [2024-07-26 01:16:03.757080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.336 [2024-07-26 01:16:03.757127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.336 [2024-07-26 01:16:03.757144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.594 [2024-07-26 01:16:03.764499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.594 [2024-07-26 01:16:03.764530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.594 [2024-07-26 01:16:03.764547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.594 [2024-07-26 01:16:03.771927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.594 [2024-07-26 01:16:03.771961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.594 [2024-07-26 01:16:03.771979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.594 [2024-07-26 01:16:03.779441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.594 [2024-07-26 01:16:03.779475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.594 [2024-07-26 01:16:03.779493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.594 [2024-07-26 01:16:03.786868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.594 [2024-07-26 01:16:03.786901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.594 [2024-07-26 01:16:03.786920] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.594 [2024-07-26 01:16:03.794355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.594 [2024-07-26 01:16:03.794413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.594 [2024-07-26 01:16:03.794433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.594 [2024-07-26 01:16:03.801854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.594 [2024-07-26 01:16:03.801888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.594 [2024-07-26 01:16:03.801907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.594 [2024-07-26 01:16:03.809330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.594 [2024-07-26 01:16:03.809366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.594 [2024-07-26 01:16:03.809383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.594 [2024-07-26 01:16:03.816781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.594 [2024-07-26 01:16:03.816814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:33.594 [2024-07-26 01:16:03.816834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.594 [2024-07-26 01:16:03.824303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.594 [2024-07-26 01:16:03.824331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.594 [2024-07-26 01:16:03.824363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.594 [2024-07-26 01:16:03.831766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.594 [2024-07-26 01:16:03.831798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.594 [2024-07-26 01:16:03.831817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.594 [2024-07-26 01:16:03.839577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.594 [2024-07-26 01:16:03.839612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.594 [2024-07-26 01:16:03.839631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.594 [2024-07-26 01:16:03.847140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.594 [2024-07-26 01:16:03.847170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.594 [2024-07-26 01:16:03.847187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.594 [2024-07-26 01:16:03.854725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.594 [2024-07-26 01:16:03.854758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.594 [2024-07-26 01:16:03.854776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.594 [2024-07-26 01:16:03.863732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.594 [2024-07-26 01:16:03.863768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.594 [2024-07-26 01:16:03.863787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.594 [2024-07-26 01:16:03.873337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.594 [2024-07-26 01:16:03.873385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.594 [2024-07-26 01:16:03.873405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.594 [2024-07-26 01:16:03.883149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.594 [2024-07-26 01:16:03.883181] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.594 [2024-07-26 01:16:03.883198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.594 [2024-07-26 01:16:03.892455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.594 [2024-07-26 01:16:03.892489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.594 [2024-07-26 01:16:03.892509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.594 [2024-07-26 01:16:03.902296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.594 [2024-07-26 01:16:03.902329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.594 [2024-07-26 01:16:03.902346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.594 [2024-07-26 01:16:03.911089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.594 [2024-07-26 01:16:03.911138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.594 [2024-07-26 01:16:03.911155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.594 [2024-07-26 01:16:03.920556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xa266b0) 00:33:33.594 [2024-07-26 01:16:03.920595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.594 [2024-07-26 01:16:03.920619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.594 [2024-07-26 01:16:03.929465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.594 [2024-07-26 01:16:03.929512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.594 [2024-07-26 01:16:03.929529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.594 [2024-07-26 01:16:03.939865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.594 [2024-07-26 01:16:03.939895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.594 [2024-07-26 01:16:03.939920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.594 [2024-07-26 01:16:03.950347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.594 [2024-07-26 01:16:03.950378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.594 [2024-07-26 01:16:03.950411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.594 [2024-07-26 01:16:03.959796] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.594 [2024-07-26 01:16:03.959831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.594 [2024-07-26 01:16:03.959850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.594 [2024-07-26 01:16:03.969137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.595 [2024-07-26 01:16:03.969169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.595 [2024-07-26 01:16:03.969186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.595 [2024-07-26 01:16:03.978207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.595 [2024-07-26 01:16:03.978238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.595 [2024-07-26 01:16:03.978270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.595 [2024-07-26 01:16:03.987219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.595 [2024-07-26 01:16:03.987251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.595 [2024-07-26 01:16:03.987268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:33:33.595 [2024-07-26 01:16:03.996872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.595 [2024-07-26 01:16:03.996913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.595 [2024-07-26 01:16:03.996933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.595 [2024-07-26 01:16:04.006840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.595 [2024-07-26 01:16:04.006876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.595 [2024-07-26 01:16:04.006895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.595 [2024-07-26 01:16:04.016812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.595 [2024-07-26 01:16:04.016848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.595 [2024-07-26 01:16:04.016867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.853 [2024-07-26 01:16:04.027367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.853 [2024-07-26 01:16:04.027414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.853 [2024-07-26 01:16:04.027430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.853 [2024-07-26 01:16:04.036943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.853 [2024-07-26 01:16:04.036978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.853 [2024-07-26 01:16:04.036998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.853 [2024-07-26 01:16:04.047328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.853 [2024-07-26 01:16:04.047380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.853 [2024-07-26 01:16:04.047399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.853 [2024-07-26 01:16:04.056985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.853 [2024-07-26 01:16:04.057021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.853 [2024-07-26 01:16:04.057041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.853 [2024-07-26 01:16:04.065961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.853 [2024-07-26 01:16:04.065996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.853 [2024-07-26 01:16:04.066015] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.853 [2024-07-26 01:16:04.075747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.853 [2024-07-26 01:16:04.075784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.853 [2024-07-26 01:16:04.075803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.853 [2024-07-26 01:16:04.084593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.853 [2024-07-26 01:16:04.084628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.853 [2024-07-26 01:16:04.084647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.853 [2024-07-26 01:16:04.093086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.853 [2024-07-26 01:16:04.093137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.853 [2024-07-26 01:16:04.093154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.853 [2024-07-26 01:16:04.100674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.853 [2024-07-26 01:16:04.100707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:33.853 [2024-07-26 01:16:04.100732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.853 [2024-07-26 01:16:04.108202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.853 [2024-07-26 01:16:04.108234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.853 [2024-07-26 01:16:04.108250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.853 [2024-07-26 01:16:04.115749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.853 [2024-07-26 01:16:04.115782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.853 [2024-07-26 01:16:04.115801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.853 [2024-07-26 01:16:04.123326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.853 [2024-07-26 01:16:04.123359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.853 [2024-07-26 01:16:04.123377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.853 [2024-07-26 01:16:04.130901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.853 [2024-07-26 01:16:04.130934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.853 [2024-07-26 01:16:04.130953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.853 [2024-07-26 01:16:04.138354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.853 [2024-07-26 01:16:04.138400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.853 [2024-07-26 01:16:04.138418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.853 [2024-07-26 01:16:04.145964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.853 [2024-07-26 01:16:04.145998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.853 [2024-07-26 01:16:04.146016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.853 [2024-07-26 01:16:04.153510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.853 [2024-07-26 01:16:04.153543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.853 [2024-07-26 01:16:04.153560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.853 [2024-07-26 01:16:04.160956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.853 [2024-07-26 01:16:04.160989] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.853 [2024-07-26 01:16:04.161006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.853 [2024-07-26 01:16:04.169713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.853 [2024-07-26 01:16:04.169752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.853 [2024-07-26 01:16:04.169772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.853 [2024-07-26 01:16:04.179257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.853 [2024-07-26 01:16:04.179289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.853 [2024-07-26 01:16:04.179306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.853 [2024-07-26 01:16:04.187962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.853 [2024-07-26 01:16:04.187996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.854 [2024-07-26 01:16:04.188015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.854 [2024-07-26 01:16:04.195742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 
00:33:33.854 [2024-07-26 01:16:04.195776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.854 [2024-07-26 01:16:04.195795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.854 [2024-07-26 01:16:04.203361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.854 [2024-07-26 01:16:04.203395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.854 [2024-07-26 01:16:04.203414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.854 [2024-07-26 01:16:04.210727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.854 [2024-07-26 01:16:04.210760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.854 [2024-07-26 01:16:04.210778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.854 [2024-07-26 01:16:04.218172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.854 [2024-07-26 01:16:04.218205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.854 [2024-07-26 01:16:04.218223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:33.854 [2024-07-26 01:16:04.225570] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.854 [2024-07-26 01:16:04.225603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.854 [2024-07-26 01:16:04.225621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.854 [2024-07-26 01:16:04.232937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.854 [2024-07-26 01:16:04.232970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.854 [2024-07-26 01:16:04.232988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.854 [2024-07-26 01:16:04.240432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.854 [2024-07-26 01:16:04.240465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.854 [2024-07-26 01:16:04.240483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.854 [2024-07-26 01:16:04.247918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.854 [2024-07-26 01:16:04.247954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.854 [2024-07-26 01:16:04.247972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:33:33.854 [2024-07-26 01:16:04.255319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.854 [2024-07-26 01:16:04.255354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.854 [2024-07-26 01:16:04.255372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.854 [2024-07-26 01:16:04.262715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.854 [2024-07-26 01:16:04.262748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.854 [2024-07-26 01:16:04.262766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.854 [2024-07-26 01:16:04.270087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.854 [2024-07-26 01:16:04.270120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.854 [2024-07-26 01:16:04.270138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:33.854 [2024-07-26 01:16:04.277421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:33.854 [2024-07-26 01:16:04.277453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.854 [2024-07-26 01:16:04.277472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.112 [2024-07-26 01:16:04.284881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.112 [2024-07-26 01:16:04.284913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.112 [2024-07-26 01:16:04.284931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.112 [2024-07-26 01:16:04.292351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.112 [2024-07-26 01:16:04.292385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.112 [2024-07-26 01:16:04.292403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.112 [2024-07-26 01:16:04.299787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.112 [2024-07-26 01:16:04.299822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.112 [2024-07-26 01:16:04.299846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.112 [2024-07-26 01:16:04.307184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.112 [2024-07-26 01:16:04.307219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.112 [2024-07-26 01:16:04.307238] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.112 [2024-07-26 01:16:04.314598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.112 [2024-07-26 01:16:04.314633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.112 [2024-07-26 01:16:04.314652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.112 [2024-07-26 01:16:04.322139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.112 [2024-07-26 01:16:04.322172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.112 [2024-07-26 01:16:04.322190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.112 [2024-07-26 01:16:04.329535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.112 [2024-07-26 01:16:04.329568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.112 [2024-07-26 01:16:04.329587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.112 [2024-07-26 01:16:04.336992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.112 [2024-07-26 01:16:04.337025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:34.112 [2024-07-26 01:16:04.337043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.112 [2024-07-26 01:16:04.344419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.112 [2024-07-26 01:16:04.344452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.112 [2024-07-26 01:16:04.344470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.112 [2024-07-26 01:16:04.351841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.112 [2024-07-26 01:16:04.351873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.112 [2024-07-26 01:16:04.351891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.112 [2024-07-26 01:16:04.359306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.112 [2024-07-26 01:16:04.359339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.112 [2024-07-26 01:16:04.359357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.112 [2024-07-26 01:16:04.366731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.112 [2024-07-26 01:16:04.366770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.112 [2024-07-26 01:16:04.366789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.112 [2024-07-26 01:16:04.374102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.112 [2024-07-26 01:16:04.374135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.112 [2024-07-26 01:16:04.374153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.112 [2024-07-26 01:16:04.381450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.112 [2024-07-26 01:16:04.381483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.113 [2024-07-26 01:16:04.381502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.113 [2024-07-26 01:16:04.388785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.113 [2024-07-26 01:16:04.388817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.113 [2024-07-26 01:16:04.388835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.113 [2024-07-26 01:16:04.396167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.113 [2024-07-26 01:16:04.396200] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.113 [2024-07-26 01:16:04.396218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.113 [2024-07-26 01:16:04.403546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.113 [2024-07-26 01:16:04.403579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.113 [2024-07-26 01:16:04.403597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.113 [2024-07-26 01:16:04.410946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.113 [2024-07-26 01:16:04.410978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.113 [2024-07-26 01:16:04.410996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.113 [2024-07-26 01:16:04.418383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.113 [2024-07-26 01:16:04.418415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.113 [2024-07-26 01:16:04.418433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.113 [2024-07-26 01:16:04.425868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 
00:33:34.113 [2024-07-26 01:16:04.425901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.113 [2024-07-26 01:16:04.425918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.113 [2024-07-26 01:16:04.433254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.113 [2024-07-26 01:16:04.433287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.113 [2024-07-26 01:16:04.433305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.113 [2024-07-26 01:16:04.440655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.113 [2024-07-26 01:16:04.440688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.113 [2024-07-26 01:16:04.440705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.113 [2024-07-26 01:16:04.448132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.113 [2024-07-26 01:16:04.448164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.113 [2024-07-26 01:16:04.448183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.113 [2024-07-26 01:16:04.455525] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.113 [2024-07-26 01:16:04.455557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.113 [2024-07-26 01:16:04.455576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.113 [2024-07-26 01:16:04.462945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.113 [2024-07-26 01:16:04.462977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.113 [2024-07-26 01:16:04.462995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.113 [2024-07-26 01:16:04.470553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.113 [2024-07-26 01:16:04.470585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.113 [2024-07-26 01:16:04.470602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.113 [2024-07-26 01:16:04.478043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.113 [2024-07-26 01:16:04.478081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.113 [2024-07-26 01:16:04.478100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:33:34.113 [2024-07-26 01:16:04.485435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.113 [2024-07-26 01:16:04.485468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.113 [2024-07-26 01:16:04.485486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.113 [2024-07-26 01:16:04.492900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.113 [2024-07-26 01:16:04.492932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.113 [2024-07-26 01:16:04.492956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.113 [2024-07-26 01:16:04.500467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.113 [2024-07-26 01:16:04.500502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.113 [2024-07-26 01:16:04.500521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.113 [2024-07-26 01:16:04.507889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.113 [2024-07-26 01:16:04.507923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.113 [2024-07-26 01:16:04.507942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.113 [2024-07-26 01:16:04.515290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.113 [2024-07-26 01:16:04.515324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.113 [2024-07-26 01:16:04.515342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.113 [2024-07-26 01:16:04.522730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.113 [2024-07-26 01:16:04.522763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.113 [2024-07-26 01:16:04.522781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.113 [2024-07-26 01:16:04.530108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.113 [2024-07-26 01:16:04.530140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.113 [2024-07-26 01:16:04.530158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.113 [2024-07-26 01:16:04.537431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.113 [2024-07-26 01:16:04.537464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.113 [2024-07-26 01:16:04.537482] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.371 [2024-07-26 01:16:04.544896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.371 [2024-07-26 01:16:04.544929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.371 [2024-07-26 01:16:04.544947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.371 [2024-07-26 01:16:04.552340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.371 [2024-07-26 01:16:04.552373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.371 [2024-07-26 01:16:04.552390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.371 [2024-07-26 01:16:04.559715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.371 [2024-07-26 01:16:04.559753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.371 [2024-07-26 01:16:04.559771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.371 [2024-07-26 01:16:04.567107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.371 [2024-07-26 01:16:04.567140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:34.371 [2024-07-26 01:16:04.567158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.371 [2024-07-26 01:16:04.574505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.371 [2024-07-26 01:16:04.574537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.371 [2024-07-26 01:16:04.574555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.371 [2024-07-26 01:16:04.581909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.371 [2024-07-26 01:16:04.581941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.372 [2024-07-26 01:16:04.581959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.372 [2024-07-26 01:16:04.589324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.372 [2024-07-26 01:16:04.589356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.372 [2024-07-26 01:16:04.589374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.372 [2024-07-26 01:16:04.596684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.372 [2024-07-26 01:16:04.596724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.372 [2024-07-26 01:16:04.596743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.372 [2024-07-26 01:16:04.604170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.372 [2024-07-26 01:16:04.604204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.372 [2024-07-26 01:16:04.604222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.372 [2024-07-26 01:16:04.611534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.372 [2024-07-26 01:16:04.611567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.372 [2024-07-26 01:16:04.611585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.372 [2024-07-26 01:16:04.618843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.372 [2024-07-26 01:16:04.618875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.372 [2024-07-26 01:16:04.618893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.372 [2024-07-26 01:16:04.626285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.372 [2024-07-26 01:16:04.626319] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.372 [2024-07-26 01:16:04.626337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.372 [2024-07-26 01:16:04.633778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.372 [2024-07-26 01:16:04.633810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.372 [2024-07-26 01:16:04.633828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.372 [2024-07-26 01:16:04.641270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.372 [2024-07-26 01:16:04.641302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.372 [2024-07-26 01:16:04.641320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.372 [2024-07-26 01:16:04.648751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.372 [2024-07-26 01:16:04.648784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.372 [2024-07-26 01:16:04.648802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.372 [2024-07-26 01:16:04.656333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 
00:33:34.372 [2024-07-26 01:16:04.656365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.372 [2024-07-26 01:16:04.656383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.372 [2024-07-26 01:16:04.663807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.372 [2024-07-26 01:16:04.663839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.372 [2024-07-26 01:16:04.663857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.372 [2024-07-26 01:16:04.671229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.372 [2024-07-26 01:16:04.671262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.372 [2024-07-26 01:16:04.671280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.372 [2024-07-26 01:16:04.678613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.372 [2024-07-26 01:16:04.678645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.372 [2024-07-26 01:16:04.678663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.372 [2024-07-26 01:16:04.685985] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.372 [2024-07-26 01:16:04.686017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.372 [2024-07-26 01:16:04.686042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.372 [2024-07-26 01:16:04.693411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.372 [2024-07-26 01:16:04.693444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.372 [2024-07-26 01:16:04.693462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.372 [2024-07-26 01:16:04.700816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.372 [2024-07-26 01:16:04.700849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.372 [2024-07-26 01:16:04.700867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.372 [2024-07-26 01:16:04.708300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.372 [2024-07-26 01:16:04.708333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.372 [2024-07-26 01:16:04.708351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:33:34.372 [2024-07-26 01:16:04.715674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.372 [2024-07-26 01:16:04.715706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.372 [2024-07-26 01:16:04.715723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.372 [2024-07-26 01:16:04.723069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.372 [2024-07-26 01:16:04.723101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.372 [2024-07-26 01:16:04.723119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.372 [2024-07-26 01:16:04.730435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.372 [2024-07-26 01:16:04.730469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.372 [2024-07-26 01:16:04.730486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.372 [2024-07-26 01:16:04.737804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.372 [2024-07-26 01:16:04.737838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.372 [2024-07-26 01:16:04.737856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.372 [2024-07-26 01:16:04.745348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.372 [2024-07-26 01:16:04.745383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.372 [2024-07-26 01:16:04.745402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.372 [2024-07-26 01:16:04.752836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.372 [2024-07-26 01:16:04.752878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.372 [2024-07-26 01:16:04.752898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.372 [2024-07-26 01:16:04.760367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.372 [2024-07-26 01:16:04.760401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.372 [2024-07-26 01:16:04.760419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.372 [2024-07-26 01:16:04.767742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.372 [2024-07-26 01:16:04.767776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.372 [2024-07-26 01:16:04.767794] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.372 [2024-07-26 01:16:04.775244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.372 [2024-07-26 01:16:04.775281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.372 [2024-07-26 01:16:04.775300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.372 [2024-07-26 01:16:04.782574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.373 [2024-07-26 01:16:04.782607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.373 [2024-07-26 01:16:04.782626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.373 [2024-07-26 01:16:04.789946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.373 [2024-07-26 01:16:04.789980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.373 [2024-07-26 01:16:04.789998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.373 [2024-07-26 01:16:04.797301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.373 [2024-07-26 01:16:04.797334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:34.373 [2024-07-26 01:16:04.797352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.631 [2024-07-26 01:16:04.804734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.631 [2024-07-26 01:16:04.804768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.631 [2024-07-26 01:16:04.804786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.631 [2024-07-26 01:16:04.812139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.631 [2024-07-26 01:16:04.812171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.631 [2024-07-26 01:16:04.812189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.631 [2024-07-26 01:16:04.820205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.631 [2024-07-26 01:16:04.820241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.631 [2024-07-26 01:16:04.820260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.631 [2024-07-26 01:16:04.829433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.631 [2024-07-26 01:16:04.829469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.631 [2024-07-26 01:16:04.829487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.631 [2024-07-26 01:16:04.838481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.631 [2024-07-26 01:16:04.838517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.631 [2024-07-26 01:16:04.838535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.631 [2024-07-26 01:16:04.846724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.631 [2024-07-26 01:16:04.846759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.631 [2024-07-26 01:16:04.846777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.631 [2024-07-26 01:16:04.854234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.631 [2024-07-26 01:16:04.854268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.631 [2024-07-26 01:16:04.854286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.631 [2024-07-26 01:16:04.861706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.631 [2024-07-26 01:16:04.861740] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.631 [2024-07-26 01:16:04.861759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.631 [2024-07-26 01:16:04.869041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.631 [2024-07-26 01:16:04.869083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.631 [2024-07-26 01:16:04.869102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.631 [2024-07-26 01:16:04.876791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.631 [2024-07-26 01:16:04.876827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.631 [2024-07-26 01:16:04.876846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.631 [2024-07-26 01:16:04.885003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.631 [2024-07-26 01:16:04.885037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.631 [2024-07-26 01:16:04.885072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.631 [2024-07-26 01:16:04.892973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 
00:33:34.631 [2024-07-26 01:16:04.893007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.631 [2024-07-26 01:16:04.893026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.631 [2024-07-26 01:16:04.900944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.631 [2024-07-26 01:16:04.900977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.631 [2024-07-26 01:16:04.900996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.631 [2024-07-26 01:16:04.907711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.631 [2024-07-26 01:16:04.907744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.631 [2024-07-26 01:16:04.907762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.631 [2024-07-26 01:16:04.915233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.631 [2024-07-26 01:16:04.915265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.631 [2024-07-26 01:16:04.915284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.631 [2024-07-26 01:16:04.922745] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.631 [2024-07-26 01:16:04.922779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.631 [2024-07-26 01:16:04.922797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.631 [2024-07-26 01:16:04.930218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.631 [2024-07-26 01:16:04.930252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.631 [2024-07-26 01:16:04.930270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.631 [2024-07-26 01:16:04.937624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.631 [2024-07-26 01:16:04.937658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.631 [2024-07-26 01:16:04.937676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.631 [2024-07-26 01:16:04.945082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.631 [2024-07-26 01:16:04.945125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.631 [2024-07-26 01:16:04.945143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:33:34.631 [2024-07-26 01:16:04.952532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.631 [2024-07-26 01:16:04.952573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.631 [2024-07-26 01:16:04.952593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.631 [2024-07-26 01:16:04.959918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.631 [2024-07-26 01:16:04.959951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.631 [2024-07-26 01:16:04.959969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.631 [2024-07-26 01:16:04.967255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.631 [2024-07-26 01:16:04.967289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.631 [2024-07-26 01:16:04.967307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.631 [2024-07-26 01:16:04.974740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.631 [2024-07-26 01:16:04.974774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.632 [2024-07-26 01:16:04.974792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.632 [2024-07-26 01:16:04.982089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.632 [2024-07-26 01:16:04.982121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.632 [2024-07-26 01:16:04.982140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.632 [2024-07-26 01:16:04.989564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.632 [2024-07-26 01:16:04.989597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.632 [2024-07-26 01:16:04.989614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.632 [2024-07-26 01:16:04.997100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.632 [2024-07-26 01:16:04.997131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.632 [2024-07-26 01:16:04.997149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.632 [2024-07-26 01:16:05.004568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.632 [2024-07-26 01:16:05.004603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.632 [2024-07-26 01:16:05.004621] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.632 [2024-07-26 01:16:05.011878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.632 [2024-07-26 01:16:05.011912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.632 [2024-07-26 01:16:05.011930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.632 [2024-07-26 01:16:05.019201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.632 [2024-07-26 01:16:05.019234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.632 [2024-07-26 01:16:05.019252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.632 [2024-07-26 01:16:05.026631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.632 [2024-07-26 01:16:05.026664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.632 [2024-07-26 01:16:05.026681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.632 [2024-07-26 01:16:05.033960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.632 [2024-07-26 01:16:05.033994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:34.632 [2024-07-26 01:16:05.034012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.632 [2024-07-26 01:16:05.041316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.632 [2024-07-26 01:16:05.041349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.632 [2024-07-26 01:16:05.041366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.632 [2024-07-26 01:16:05.048998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.632 [2024-07-26 01:16:05.049032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.632 [2024-07-26 01:16:05.049050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.632 [2024-07-26 01:16:05.056416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.632 [2024-07-26 01:16:05.056449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.632 [2024-07-26 01:16:05.056467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.890 [2024-07-26 01:16:05.063877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.890 [2024-07-26 01:16:05.063911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.890 [2024-07-26 01:16:05.063929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.890 [2024-07-26 01:16:05.071314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.890 [2024-07-26 01:16:05.071347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.890 [2024-07-26 01:16:05.071365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.890 [2024-07-26 01:16:05.078690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.890 [2024-07-26 01:16:05.078723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.890 [2024-07-26 01:16:05.078747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.890 [2024-07-26 01:16:05.086356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.890 [2024-07-26 01:16:05.086391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.890 [2024-07-26 01:16:05.086409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.890 [2024-07-26 01:16:05.093779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.890 [2024-07-26 01:16:05.093815] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.890 [2024-07-26 01:16:05.093833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.890 [2024-07-26 01:16:05.101392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.890 [2024-07-26 01:16:05.101427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.890 [2024-07-26 01:16:05.101445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.890 [2024-07-26 01:16:05.108862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.890 [2024-07-26 01:16:05.108895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.891 [2024-07-26 01:16:05.108913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.891 [2024-07-26 01:16:05.116273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.891 [2024-07-26 01:16:05.116306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.891 [2024-07-26 01:16:05.116323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.891 [2024-07-26 01:16:05.123587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 
00:33:34.891 [2024-07-26 01:16:05.123620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.891 [2024-07-26 01:16:05.123637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.891 [2024-07-26 01:16:05.130905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.891 [2024-07-26 01:16:05.130936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.891 [2024-07-26 01:16:05.130954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.891 [2024-07-26 01:16:05.138258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.891 [2024-07-26 01:16:05.138291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.891 [2024-07-26 01:16:05.138309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.891 [2024-07-26 01:16:05.145699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.891 [2024-07-26 01:16:05.145731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.891 [2024-07-26 01:16:05.145749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.891 [2024-07-26 01:16:05.153533] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.891 [2024-07-26 01:16:05.153567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.891 [2024-07-26 01:16:05.153585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.891 [2024-07-26 01:16:05.160871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.891 [2024-07-26 01:16:05.160905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.891 [2024-07-26 01:16:05.160923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.891 [2024-07-26 01:16:05.168221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.891 [2024-07-26 01:16:05.168253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.891 [2024-07-26 01:16:05.168271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.891 [2024-07-26 01:16:05.175583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.891 [2024-07-26 01:16:05.175616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.891 [2024-07-26 01:16:05.175634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:33:34.891 [2024-07-26 01:16:05.182988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.891 [2024-07-26 01:16:05.183020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.891 [2024-07-26 01:16:05.183037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.891 [2024-07-26 01:16:05.190306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.891 [2024-07-26 01:16:05.190338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.891 [2024-07-26 01:16:05.190356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.891 [2024-07-26 01:16:05.197620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.891 [2024-07-26 01:16:05.197653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.891 [2024-07-26 01:16:05.197671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.891 [2024-07-26 01:16:05.204925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.891 [2024-07-26 01:16:05.204958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.891 [2024-07-26 01:16:05.204982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.891 [2024-07-26 01:16:05.212297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.891 [2024-07-26 01:16:05.212329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.891 [2024-07-26 01:16:05.212347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.891 [2024-07-26 01:16:05.219630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.891 [2024-07-26 01:16:05.219663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.891 [2024-07-26 01:16:05.219681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.891 [2024-07-26 01:16:05.227149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.891 [2024-07-26 01:16:05.227182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.891 [2024-07-26 01:16:05.227200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.891 [2024-07-26 01:16:05.234518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.891 [2024-07-26 01:16:05.234551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.891 [2024-07-26 01:16:05.234569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.891 [2024-07-26 01:16:05.241930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.891 [2024-07-26 01:16:05.241965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.891 [2024-07-26 01:16:05.241983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.891 [2024-07-26 01:16:05.249349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.891 [2024-07-26 01:16:05.249381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.891 [2024-07-26 01:16:05.249399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.891 [2024-07-26 01:16:05.256789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.891 [2024-07-26 01:16:05.256824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.891 [2024-07-26 01:16:05.256842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.891 [2024-07-26 01:16:05.264199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.891 [2024-07-26 01:16:05.264233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:34.891 [2024-07-26 01:16:05.264252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.891 [2024-07-26 01:16:05.271536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.891 [2024-07-26 01:16:05.271574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.891 [2024-07-26 01:16:05.271593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.891 [2024-07-26 01:16:05.278846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.891 [2024-07-26 01:16:05.278879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.891 [2024-07-26 01:16:05.278897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.891 [2024-07-26 01:16:05.286211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.891 [2024-07-26 01:16:05.286244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.891 [2024-07-26 01:16:05.286262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:34.891 [2024-07-26 01:16:05.293673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.891 [2024-07-26 01:16:05.293706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.891 [2024-07-26 01:16:05.293724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:34.891 [2024-07-26 01:16:05.301028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.891 [2024-07-26 01:16:05.301068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.891 [2024-07-26 01:16:05.301088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.892 [2024-07-26 01:16:05.308420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.892 [2024-07-26 01:16:05.308452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.892 [2024-07-26 01:16:05.308470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.892 [2024-07-26 01:16:05.315860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:34.892 [2024-07-26 01:16:05.315892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.892 [2024-07-26 01:16:05.315910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:35.149 [2024-07-26 01:16:05.323125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.149 [2024-07-26 01:16:05.323158] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.149 [2024-07-26 01:16:05.323177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:35.149 [2024-07-26 01:16:05.330409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.149 [2024-07-26 01:16:05.330442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.149 [2024-07-26 01:16:05.330459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:35.149 [2024-07-26 01:16:05.337780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.149 [2024-07-26 01:16:05.337813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.149 [2024-07-26 01:16:05.337831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.150 [2024-07-26 01:16:05.345173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.150 [2024-07-26 01:16:05.345206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.150 [2024-07-26 01:16:05.345224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:35.150 [2024-07-26 01:16:05.352474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 
00:33:35.150 [2024-07-26 01:16:05.352505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.150 [2024-07-26 01:16:05.352523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:35.150 [2024-07-26 01:16:05.359846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.150 [2024-07-26 01:16:05.359879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.150 [2024-07-26 01:16:05.359897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:35.150 [2024-07-26 01:16:05.367234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.150 [2024-07-26 01:16:05.367266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.150 [2024-07-26 01:16:05.367284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.150 [2024-07-26 01:16:05.374793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.150 [2024-07-26 01:16:05.374826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.150 [2024-07-26 01:16:05.374844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:35.150 [2024-07-26 01:16:05.382251] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.150 [2024-07-26 01:16:05.382283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.150 [2024-07-26 01:16:05.382301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:35.150 [2024-07-26 01:16:05.389710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.150 [2024-07-26 01:16:05.389742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.150 [2024-07-26 01:16:05.389760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:35.150 [2024-07-26 01:16:05.397258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.150 [2024-07-26 01:16:05.397291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.150 [2024-07-26 01:16:05.397315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.150 [2024-07-26 01:16:05.404771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.150 [2024-07-26 01:16:05.404803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.150 [2024-07-26 01:16:05.404820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:33:35.150 [2024-07-26 01:16:05.412081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.150 [2024-07-26 01:16:05.412117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.150 [2024-07-26 01:16:05.412135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:35.150 [2024-07-26 01:16:05.419436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.150 [2024-07-26 01:16:05.419468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.150 [2024-07-26 01:16:05.419486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:35.150 [2024-07-26 01:16:05.426845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.150 [2024-07-26 01:16:05.426877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.150 [2024-07-26 01:16:05.426895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.150 [2024-07-26 01:16:05.435396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.150 [2024-07-26 01:16:05.435430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.150 [2024-07-26 01:16:05.435449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:35.150 [2024-07-26 01:16:05.445173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.150 [2024-07-26 01:16:05.445207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.150 [2024-07-26 01:16:05.445225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:35.150 [2024-07-26 01:16:05.454906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.150 [2024-07-26 01:16:05.454940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.150 [2024-07-26 01:16:05.454959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:35.150 [2024-07-26 01:16:05.464484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.150 [2024-07-26 01:16:05.464518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.150 [2024-07-26 01:16:05.464537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.150 [2024-07-26 01:16:05.474180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.150 [2024-07-26 01:16:05.474220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.150 [2024-07-26 01:16:05.474240] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:35.150 [2024-07-26 01:16:05.484398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.150 [2024-07-26 01:16:05.484433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.150 [2024-07-26 01:16:05.484452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:35.150 [2024-07-26 01:16:05.494304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.150 [2024-07-26 01:16:05.494339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.150 [2024-07-26 01:16:05.494358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:35.150 [2024-07-26 01:16:05.503994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.150 [2024-07-26 01:16:05.504029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.150 [2024-07-26 01:16:05.504047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.150 [2024-07-26 01:16:05.509307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.150 [2024-07-26 01:16:05.509343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:35.150 [2024-07-26 01:16:05.509362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:35.150 [2024-07-26 01:16:05.518841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.150 [2024-07-26 01:16:05.518876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.150 [2024-07-26 01:16:05.518895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:35.150 [2024-07-26 01:16:05.527364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.150 [2024-07-26 01:16:05.527400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.150 [2024-07-26 01:16:05.527419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:35.150 [2024-07-26 01:16:05.536909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.150 [2024-07-26 01:16:05.536944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.150 [2024-07-26 01:16:05.536963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.150 [2024-07-26 01:16:05.546003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.150 [2024-07-26 01:16:05.546037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.150 [2024-07-26 01:16:05.546056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:35.150 [2024-07-26 01:16:05.554079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.150 [2024-07-26 01:16:05.554113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.150 [2024-07-26 01:16:05.554132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:35.151 [2024-07-26 01:16:05.562700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.151 [2024-07-26 01:16:05.562735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.151 [2024-07-26 01:16:05.562754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:35.151 [2024-07-26 01:16:05.570293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.151 [2024-07-26 01:16:05.570326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.151 [2024-07-26 01:16:05.570344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.409 [2024-07-26 01:16:05.577672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.409 [2024-07-26 01:16:05.577706] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.409 [2024-07-26 01:16:05.577724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:35.409 [2024-07-26 01:16:05.585073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.409 [2024-07-26 01:16:05.585105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.409 [2024-07-26 01:16:05.585123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:35.409 [2024-07-26 01:16:05.592585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.409 [2024-07-26 01:16:05.592619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.409 [2024-07-26 01:16:05.592637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:35.409 [2024-07-26 01:16:05.599927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.409 [2024-07-26 01:16:05.599960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.409 [2024-07-26 01:16:05.599978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.409 [2024-07-26 01:16:05.607339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 
00:33:35.409 [2024-07-26 01:16:05.607372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.409 [2024-07-26 01:16:05.607390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:35.409 [2024-07-26 01:16:05.614795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.409 [2024-07-26 01:16:05.614828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.409 [2024-07-26 01:16:05.614853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:35.409 [2024-07-26 01:16:05.622210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.409 [2024-07-26 01:16:05.622243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.409 [2024-07-26 01:16:05.622261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:35.409 [2024-07-26 01:16:05.629627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.409 [2024-07-26 01:16:05.629660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.409 [2024-07-26 01:16:05.629677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.409 [2024-07-26 01:16:05.637159] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.409 [2024-07-26 01:16:05.637192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.409 [2024-07-26 01:16:05.637210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:35.409 [2024-07-26 01:16:05.644689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.409 [2024-07-26 01:16:05.644722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.409 [2024-07-26 01:16:05.644739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:35.409 [2024-07-26 01:16:05.651990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.409 [2024-07-26 01:16:05.652022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.409 [2024-07-26 01:16:05.652041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:35.409 [2024-07-26 01:16:05.659461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.409 [2024-07-26 01:16:05.659493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.409 [2024-07-26 01:16:05.659511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:33:35.409 [2024-07-26 01:16:05.666843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.409 [2024-07-26 01:16:05.666876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.409 [2024-07-26 01:16:05.666894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:35.409 [2024-07-26 01:16:05.674433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.409 [2024-07-26 01:16:05.674466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.409 [2024-07-26 01:16:05.674484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:35.409 [2024-07-26 01:16:05.681881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.409 [2024-07-26 01:16:05.681920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.410 [2024-07-26 01:16:05.681938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:35.410 [2024-07-26 01:16:05.689305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.410 [2024-07-26 01:16:05.689338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.410 [2024-07-26 01:16:05.689356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.410 [2024-07-26 01:16:05.696759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.410 [2024-07-26 01:16:05.696792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.410 [2024-07-26 01:16:05.696810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:35.410 [2024-07-26 01:16:05.704233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.410 [2024-07-26 01:16:05.704265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.410 [2024-07-26 01:16:05.704284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:35.410 [2024-07-26 01:16:05.711629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.410 [2024-07-26 01:16:05.711661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.410 [2024-07-26 01:16:05.711679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:35.410 [2024-07-26 01:16:05.719071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0) 00:33:35.410 [2024-07-26 01:16:05.719103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.410 [2024-07-26 01:16:05.719121] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:35.410 [2024-07-26 01:16:05.726568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0)
00:33:35.410 [2024-07-26 01:16:05.726600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:35.410 [2024-07-26 01:16:05.726618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:35.410 [2024-07-26 01:16:05.733941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa266b0)
00:33:35.410 [2024-07-26 01:16:05.733972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:35.410 [2024-07-26 01:16:05.733991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:35.410
00:33:35.410 Latency(us)
00:33:35.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:35.410 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:33:35.410 nvme0n1 : 2.00 3995.11 499.39 0.00 0.00 3999.27 752.45 10631.40
00:33:35.410 ===================================================================================================================
00:33:35.410 Total : 3995.11 499.39 0.00 0.00 3999.27 752.45 10631.40
00:33:35.410 0
00:33:35.410 01:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:35.410 01:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:35.410 01:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:35.410 01:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:35.410 | .driver_specific
00:33:35.410 | .nvme_error
00:33:35.410 | .status_code
00:33:35.410 | .command_transient_transport_error'
00:33:35.668 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 258 > 0 ))
00:33:35.668 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1976724
00:33:35.668 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1976724 ']'
00:33:35.668 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1976724
00:33:35.668 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:33:35.668 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:33:35.668 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1976724
00:33:35.668 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:33:35.668 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:33:35.668 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1976724'
killing process with pid 1976724
01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1976724
Received shutdown signal, test time was about 2.000000 seconds
00:33:35.668
00:33:35.668 Latency(us)
00:33:35.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:35.668 ===================================================================================================================
00:33:35.668 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:35.668 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1976724
00:33:35.925 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:33:35.925 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:35.925 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:33:35.925 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:33:35.926 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:33:35.926 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1977128
00:33:35.926 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:33:35.926 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1977128 /var/tmp/bperf.sock
00:33:35.926 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1977128 ']'
00:33:35.926 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:35.926 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:33:35.926 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket
/var/tmp/bperf.sock...' 00:33:35.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:35.926 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:35.926 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:35.926 [2024-07-26 01:16:06.298364] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:33:35.926 [2024-07-26 01:16:06.298441] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1977128 ] 00:33:35.926 EAL: No free 2048 kB hugepages reported on node 1 00:33:36.183 [2024-07-26 01:16:06.361303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:36.183 [2024-07-26 01:16:06.460897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:36.183 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:36.183 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:33:36.183 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:36.183 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:36.441 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:36.441 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.441 
01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:36.441 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.441 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:36.441 01:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:37.007 nvme0n1 00:33:37.007 01:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:37.007 01:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.007 01:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:37.007 01:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.007 01:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:37.007 01:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:37.266 Running I/O for 2 seconds... 
00:33:37.266 [2024-07-26 01:16:07.454104] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ee5c8 00:33:37.266 [2024-07-26 01:16:07.455009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.266 [2024-07-26 01:16:07.455067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:37.266 [2024-07-26 01:16:07.465326] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190fac10 00:33:37.266 [2024-07-26 01:16:07.466215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.266 [2024-07-26 01:16:07.466244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:37.266 [2024-07-26 01:16:07.478419] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e9e10 00:33:37.266 [2024-07-26 01:16:07.479562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.266 [2024-07-26 01:16:07.479590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:37.266 [2024-07-26 01:16:07.490555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e6300 00:33:37.266 [2024-07-26 01:16:07.491782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.266 [2024-07-26 01:16:07.491814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:99 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:37.266 [2024-07-26 01:16:07.502768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e95a0 00:33:37.266 [2024-07-26 01:16:07.504097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.266 [2024-07-26 01:16:07.504125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:37.266 [2024-07-26 01:16:07.513756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e73e0 00:33:37.266 [2024-07-26 01:16:07.515085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.266 [2024-07-26 01:16:07.515122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:37.266 [2024-07-26 01:16:07.525813] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190feb58 00:33:37.266 [2024-07-26 01:16:07.527282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.266 [2024-07-26 01:16:07.527310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:37.266 [2024-07-26 01:16:07.537943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f1430 00:33:37.266 [2024-07-26 01:16:07.539655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.266 [2024-07-26 01:16:07.539683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:37.266 [2024-07-26 01:16:07.550121] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190eaab8 00:33:37.266 [2024-07-26 01:16:07.551886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.266 [2024-07-26 01:16:07.551917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:37.266 [2024-07-26 01:16:07.558426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190df550 00:33:37.266 [2024-07-26 01:16:07.559209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.266 [2024-07-26 01:16:07.559240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:37.266 [2024-07-26 01:16:07.569286] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f2d80 00:33:37.266 [2024-07-26 01:16:07.570016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.266 [2024-07-26 01:16:07.570043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:37.267 [2024-07-26 01:16:07.581386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190fc128 00:33:37.267 [2024-07-26 01:16:07.582285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.267 [2024-07-26 01:16:07.582313] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:37.267 [2024-07-26 01:16:07.593483] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e1f80 00:33:37.267 [2024-07-26 01:16:07.594536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.267 [2024-07-26 01:16:07.594564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:37.267 [2024-07-26 01:16:07.605631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ebfd0 00:33:37.267 [2024-07-26 01:16:07.606828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.267 [2024-07-26 01:16:07.606856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:37.267 [2024-07-26 01:16:07.616608] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e6738 00:33:37.267 [2024-07-26 01:16:07.617362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.267 [2024-07-26 01:16:07.617390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:37.267 [2024-07-26 01:16:07.628215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ff3c8 00:33:37.267 [2024-07-26 01:16:07.628896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.267 
[2024-07-26 01:16:07.628925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:37.267 [2024-07-26 01:16:07.640315] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190fc998 00:33:37.267 [2024-07-26 01:16:07.641253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.267 [2024-07-26 01:16:07.641282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:37.267 [2024-07-26 01:16:07.653661] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f8618 00:33:37.267 [2024-07-26 01:16:07.655196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.267 [2024-07-26 01:16:07.655231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:37.267 [2024-07-26 01:16:07.664608] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f3e60 00:33:37.267 [2024-07-26 01:16:07.665783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.267 [2024-07-26 01:16:07.665822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:37.267 [2024-07-26 01:16:07.676652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f6890 00:33:37.267 [2024-07-26 01:16:07.677625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5273 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:33:37.267 [2024-07-26 01:16:07.677652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:37.267 [2024-07-26 01:16:07.687548] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f9f68 00:33:37.267 [2024-07-26 01:16:07.689318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.267 [2024-07-26 01:16:07.689347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:37.525 [2024-07-26 01:16:07.698581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f6890 00:33:37.525 [2024-07-26 01:16:07.699373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.525 [2024-07-26 01:16:07.699404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:37.525 [2024-07-26 01:16:07.710500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ef6a8 00:33:37.525 [2024-07-26 01:16:07.711411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.525 [2024-07-26 01:16:07.711439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:37.525 [2024-07-26 01:16:07.721601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190fbcf0 00:33:37.525 [2024-07-26 01:16:07.722514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:21 nsid:1 lba:20062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.525 [2024-07-26 01:16:07.722541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:37.525 [2024-07-26 01:16:07.733747] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f1430 00:33:37.525 [2024-07-26 01:16:07.734786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.525 [2024-07-26 01:16:07.734814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:37.525 [2024-07-26 01:16:07.745797] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190eee38 00:33:37.525 [2024-07-26 01:16:07.746975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.525 [2024-07-26 01:16:07.747002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:37.525 [2024-07-26 01:16:07.757268] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e8088 00:33:37.525 [2024-07-26 01:16:07.758984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.525 [2024-07-26 01:16:07.759013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:37.525 [2024-07-26 01:16:07.768028] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e3d08 00:33:37.525 [2024-07-26 01:16:07.768806] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.525 [2024-07-26 01:16:07.768838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:37.525 [2024-07-26 01:16:07.779981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190df550 00:33:37.526 [2024-07-26 01:16:07.780919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:14476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.526 [2024-07-26 01:16:07.780947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:37.526 [2024-07-26 01:16:07.792088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f2510 00:33:37.526 [2024-07-26 01:16:07.793098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.526 [2024-07-26 01:16:07.793126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:37.526 [2024-07-26 01:16:07.804005] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ec408 00:33:37.526 [2024-07-26 01:16:07.805140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.526 [2024-07-26 01:16:07.805170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:37.526 [2024-07-26 01:16:07.816002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190fbcf0 00:33:37.526 
[2024-07-26 01:16:07.817224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.526 [2024-07-26 01:16:07.817252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:37.526 [2024-07-26 01:16:07.829290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f1868 00:33:37.526 [2024-07-26 01:16:07.831051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.526 [2024-07-26 01:16:07.831084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:37.526 [2024-07-26 01:16:07.837517] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f46d0 00:33:37.526 [2024-07-26 01:16:07.838286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.526 [2024-07-26 01:16:07.838314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:37.526 [2024-07-26 01:16:07.848482] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190eb328 00:33:37.526 [2024-07-26 01:16:07.849237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.526 [2024-07-26 01:16:07.849264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:37.526 [2024-07-26 01:16:07.861319] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) 
with pdu=0x2000190fda78 00:33:37.526 [2024-07-26 01:16:07.862259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.526 [2024-07-26 01:16:07.862286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:37.526 [2024-07-26 01:16:07.873218] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e3d08 00:33:37.526 [2024-07-26 01:16:07.874248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.526 [2024-07-26 01:16:07.874276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:37.526 [2024-07-26 01:16:07.885402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190fa7d8 00:33:37.526 [2024-07-26 01:16:07.886625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.526 [2024-07-26 01:16:07.886653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:37.526 [2024-07-26 01:16:07.896388] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f0bc0 00:33:37.526 [2024-07-26 01:16:07.897601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.526 [2024-07-26 01:16:07.897628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:37.526 [2024-07-26 01:16:07.907348] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e9168 00:33:37.526 [2024-07-26 01:16:07.908082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.526 [2024-07-26 01:16:07.908109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:37.526 [2024-07-26 01:16:07.919014] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ef6a8 00:33:37.526 [2024-07-26 01:16:07.919642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.526 [2024-07-26 01:16:07.919671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:37.526 [2024-07-26 01:16:07.931087] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190df988 00:33:37.526 [2024-07-26 01:16:07.931889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.526 [2024-07-26 01:16:07.931920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:37.526 [2024-07-26 01:16:07.943018] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ddc00 00:33:37.526 [2024-07-26 01:16:07.944016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.526 [2024-07-26 01:16:07.944044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:37.783 [2024-07-26 
01:16:07.954408] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190de8a8 00:33:37.783 [2024-07-26 01:16:07.956238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.783 [2024-07-26 01:16:07.956266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:37.783 [2024-07-26 01:16:07.964356] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190eff18 00:33:37.783 [2024-07-26 01:16:07.965112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.783 [2024-07-26 01:16:07.965147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:37.783 [2024-07-26 01:16:07.976572] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190edd58 00:33:37.783 [2024-07-26 01:16:07.977509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.783 [2024-07-26 01:16:07.977538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:37.783 [2024-07-26 01:16:07.988663] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190df988 00:33:37.783 [2024-07-26 01:16:07.989740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.783 [2024-07-26 01:16:07.989768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 
sqhd:0024 p:0 m:0 dnr:0 00:33:37.783 [2024-07-26 01:16:08.000882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f2510 00:33:37.783 [2024-07-26 01:16:08.002041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.783 [2024-07-26 01:16:08.002101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:37.783 [2024-07-26 01:16:08.012936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190fb480 00:33:37.783 [2024-07-26 01:16:08.014285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.784 [2024-07-26 01:16:08.014313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:37.784 [2024-07-26 01:16:08.024866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190edd58 00:33:37.784 [2024-07-26 01:16:08.026338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.784 [2024-07-26 01:16:08.026366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:37.784 [2024-07-26 01:16:08.036987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ea248 00:33:37.784 [2024-07-26 01:16:08.038703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.784 [2024-07-26 01:16:08.038731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:37.784 [2024-07-26 01:16:08.049048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190fbcf0 00:33:37.784 [2024-07-26 01:16:08.050813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.784 [2024-07-26 01:16:08.050840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:37.784 [2024-07-26 01:16:08.057279] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190eff18 00:33:37.784 [2024-07-26 01:16:08.058002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.784 [2024-07-26 01:16:08.058028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:37.784 [2024-07-26 01:16:08.070597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f2d80 00:33:37.784 [2024-07-26 01:16:08.071948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.784 [2024-07-26 01:16:08.071976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:37.784 [2024-07-26 01:16:08.082715] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e0630 00:33:37.784 [2024-07-26 01:16:08.084206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.784 [2024-07-26 01:16:08.084236] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:37.784 [2024-07-26 01:16:08.094779] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f3e60 00:33:37.784 [2024-07-26 01:16:08.096385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.784 [2024-07-26 01:16:08.096413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:37.784 [2024-07-26 01:16:08.106841] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e6b70 00:33:37.784 [2024-07-26 01:16:08.108612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.784 [2024-07-26 01:16:08.108640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:37.784 [2024-07-26 01:16:08.115044] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed0b0 00:33:37.784 [2024-07-26 01:16:08.115783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.784 [2024-07-26 01:16:08.115809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:37.784 [2024-07-26 01:16:08.125969] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e4578 00:33:37.784 [2024-07-26 01:16:08.126733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.784 
[2024-07-26 01:16:08.126760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:37.784 [2024-07-26 01:16:08.138120] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ebfd0 00:33:37.784 [2024-07-26 01:16:08.138976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.784 [2024-07-26 01:16:08.139004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:37.784 [2024-07-26 01:16:08.150211] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f7970 00:33:37.784 [2024-07-26 01:16:08.151224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.784 [2024-07-26 01:16:08.151252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:37.784 [2024-07-26 01:16:08.162361] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190eaab8 00:33:37.784 [2024-07-26 01:16:08.163568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.784 [2024-07-26 01:16:08.163597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:37.784 [2024-07-26 01:16:08.174593] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e7818 00:33:37.784 [2024-07-26 01:16:08.175939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19438 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.784 [2024-07-26 01:16:08.175967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:37.784 [2024-07-26 01:16:08.185506] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f3a28 00:33:37.784 [2024-07-26 01:16:08.186514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.784 [2024-07-26 01:16:08.186542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:37.784 [2024-07-26 01:16:08.197394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190de038 00:33:37.784 [2024-07-26 01:16:08.198173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.784 [2024-07-26 01:16:08.198202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:37.784 [2024-07-26 01:16:08.209582] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e4578 00:33:38.042 [2024-07-26 01:16:08.210549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.042 [2024-07-26 01:16:08.210578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:38.042 [2024-07-26 01:16:08.220695] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f6cc8 00:33:38.042 [2024-07-26 01:16:08.222461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:41 nsid:1 lba:4461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.042 [2024-07-26 01:16:08.222492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:38.042 [2024-07-26 01:16:08.230625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f20d8 00:33:38.042 [2024-07-26 01:16:08.231376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.042 [2024-07-26 01:16:08.231405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:38.042 [2024-07-26 01:16:08.242798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190fdeb0 00:33:38.042 [2024-07-26 01:16:08.243688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.042 [2024-07-26 01:16:08.243716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:38.042 [2024-07-26 01:16:08.254951] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e95a0 00:33:38.042 [2024-07-26 01:16:08.256017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.042 [2024-07-26 01:16:08.256045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:38.042 [2024-07-26 01:16:08.267216] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e5658 00:33:38.042 [2024-07-26 01:16:08.268469] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.042 [2024-07-26 01:16:08.268507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:38.042 [2024-07-26 01:16:08.278118] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e3d08 00:33:38.042 [2024-07-26 01:16:08.278874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.042 [2024-07-26 01:16:08.278902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:38.042 [2024-07-26 01:16:08.290982] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f5be8 00:33:38.042 [2024-07-26 01:16:08.292889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.042 [2024-07-26 01:16:08.292918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.042 [2024-07-26 01:16:08.300947] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190de038 00:33:38.042 [2024-07-26 01:16:08.301898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.042 [2024-07-26 01:16:08.301925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:38.042 [2024-07-26 01:16:08.313121] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e7c50 00:33:38.042 
[2024-07-26 01:16:08.314121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.042 [2024-07-26 01:16:08.314152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:38.042 [2024-07-26 01:16:08.325263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e8d30 00:33:38.042 [2024-07-26 01:16:08.326413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.042 [2024-07-26 01:16:08.326441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:38.042 [2024-07-26 01:16:08.337277] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e99d8 00:33:38.042 [2024-07-26 01:16:08.338591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.042 [2024-07-26 01:16:08.338618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:38.042 [2024-07-26 01:16:08.349413] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f2d80 00:33:38.042 [2024-07-26 01:16:08.350882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.042 [2024-07-26 01:16:08.350910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:38.042 [2024-07-26 01:16:08.361635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xe3d480) with pdu=0x2000190f0bc0 00:33:38.042 [2024-07-26 01:16:08.363262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.042 [2024-07-26 01:16:08.363290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:38.042 [2024-07-26 01:16:08.373691] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190fb8b8 00:33:38.042 [2024-07-26 01:16:08.375509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.042 [2024-07-26 01:16:08.375536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:38.042 [2024-07-26 01:16:08.381890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e4de8 00:33:38.042 [2024-07-26 01:16:08.382633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.042 [2024-07-26 01:16:08.382661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:38.042 [2024-07-26 01:16:08.394006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:38.042 [2024-07-26 01:16:08.394920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.042 [2024-07-26 01:16:08.394947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.042 [2024-07-26 01:16:08.405011] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f46d0 00:33:38.042 [2024-07-26 01:16:08.405914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.042 [2024-07-26 01:16:08.405948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:38.042 [2024-07-26 01:16:08.417320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190fe720 00:33:38.042 [2024-07-26 01:16:08.418333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.042 [2024-07-26 01:16:08.418362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:38.042 [2024-07-26 01:16:08.429493] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e7c50 00:33:38.042 [2024-07-26 01:16:08.430688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.042 [2024-07-26 01:16:08.430716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:38.042 [2024-07-26 01:16:08.441563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f1ca0 00:33:38.042 [2024-07-26 01:16:08.442902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.042 [2024-07-26 01:16:08.442929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:33:38.042 [2024-07-26 01:16:08.452479] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e4de8 00:33:38.042 [2024-07-26 01:16:08.453385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.042 [2024-07-26 01:16:08.453413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:38.042 [2024-07-26 01:16:08.464152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e2c28 00:33:38.042 [2024-07-26 01:16:08.464892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.042 [2024-07-26 01:16:08.464921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:38.300 [2024-07-26 01:16:08.476646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190eb328 00:33:38.300 [2024-07-26 01:16:08.477709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.300 [2024-07-26 01:16:08.477738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:38.300 [2024-07-26 01:16:08.487692] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e73e0 00:33:38.300 [2024-07-26 01:16:08.489359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:8561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.300 [2024-07-26 01:16:08.489388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:80 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:38.300 [2024-07-26 01:16:08.498487] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f6890 00:33:38.300 [2024-07-26 01:16:08.499228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.300 [2024-07-26 01:16:08.499260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.300 [2024-07-26 01:16:08.510380] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190fc560 00:33:38.300 [2024-07-26 01:16:08.511265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.301 [2024-07-26 01:16:08.511293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.301 [2024-07-26 01:16:08.523571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f3a28 00:33:38.301 [2024-07-26 01:16:08.525004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.301 [2024-07-26 01:16:08.525031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:38.301 [2024-07-26 01:16:08.535551] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f8618 00:33:38.301 [2024-07-26 01:16:08.537145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.301 [2024-07-26 01:16:08.537173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.301 [2024-07-26 01:16:08.547741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190fd640 00:33:38.301 [2024-07-26 01:16:08.549500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.301 [2024-07-26 01:16:08.549527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:38.301 [2024-07-26 01:16:08.559855] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f6cc8 00:33:38.301 [2024-07-26 01:16:08.561751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.301 [2024-07-26 01:16:08.561779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.301 [2024-07-26 01:16:08.568128] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e0ea0 00:33:38.301 [2024-07-26 01:16:08.568956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.301 [2024-07-26 01:16:08.568982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:38.301 [2024-07-26 01:16:08.579095] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e23b8 00:33:38.301 [2024-07-26 01:16:08.579973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.301 [2024-07-26 01:16:08.579999] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:38.301 [2024-07-26 01:16:08.591118] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed0b0 00:33:38.301 [2024-07-26 01:16:08.592097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.301 [2024-07-26 01:16:08.592125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.301 [2024-07-26 01:16:08.603136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190fc998 00:33:38.301 [2024-07-26 01:16:08.604271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.301 [2024-07-26 01:16:08.604299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:38.301 [2024-07-26 01:16:08.615178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e27f0 00:33:38.301 [2024-07-26 01:16:08.616501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.301 [2024-07-26 01:16:08.616529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.301 [2024-07-26 01:16:08.627173] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190fa7d8 00:33:38.301 [2024-07-26 01:16:08.628622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.301 
[2024-07-26 01:16:08.628650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:38.301 [2024-07-26 01:16:08.638759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ec840 00:33:38.301 [2024-07-26 01:16:08.639886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.301 [2024-07-26 01:16:08.639913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.301 [2024-07-26 01:16:08.651230] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190feb58 00:33:38.301 [2024-07-26 01:16:08.652264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.301 [2024-07-26 01:16:08.652296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:38.301 [2024-07-26 01:16:08.664516] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e4578 00:33:38.301 [2024-07-26 01:16:08.665711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.301 [2024-07-26 01:16:08.665740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.301 [2024-07-26 01:16:08.676512] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e0ea0 00:33:38.301 [2024-07-26 01:16:08.678581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9178 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:33:38.301 [2024-07-26 01:16:08.678616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:38.301 [2024-07-26 01:16:08.688041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190fb048 00:33:38.301 [2024-07-26 01:16:08.689046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.301 [2024-07-26 01:16:08.689095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:38.301 [2024-07-26 01:16:08.702367] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e6b70 00:33:38.301 [2024-07-26 01:16:08.703951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.301 [2024-07-26 01:16:08.703982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:38.301 [2024-07-26 01:16:08.714165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e95a0 00:33:38.301 [2024-07-26 01:16:08.715300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.301 [2024-07-26 01:16:08.715328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:38.301 [2024-07-26 01:16:08.727072] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e6fa8 00:33:38.559 [2024-07-26 01:16:08.728028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:111 nsid:1 lba:14350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.559 [2024-07-26 01:16:08.728066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:38.559 [2024-07-26 01:16:08.740024] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f7da8 00:33:38.559 [2024-07-26 01:16:08.741345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.559 [2024-07-26 01:16:08.741403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:38.559 [2024-07-26 01:16:08.753053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190dece0 00:33:38.559 [2024-07-26 01:16:08.754480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.559 [2024-07-26 01:16:08.754512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:38.559 [2024-07-26 01:16:08.764909] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e84c0 00:33:38.559 [2024-07-26 01:16:08.766328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.559 [2024-07-26 01:16:08.766370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:38.559 [2024-07-26 01:16:08.778080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f92c0 00:33:38.559 [2024-07-26 01:16:08.779654] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.559 [2024-07-26 01:16:08.779685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:38.559 [2024-07-26 01:16:08.791192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e2c28 00:33:38.559 [2024-07-26 01:16:08.792929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:18954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.559 [2024-07-26 01:16:08.792971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:38.559 [2024-07-26 01:16:08.802923] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e5ec8 00:33:38.559 [2024-07-26 01:16:08.804258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.559 [2024-07-26 01:16:08.804285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:38.559 [2024-07-26 01:16:08.815940] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed0b0 00:33:38.559 [2024-07-26 01:16:08.817130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.560 [2024-07-26 01:16:08.817159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:38.560 [2024-07-26 01:16:08.830453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e0630 00:33:38.560 
[2024-07-26 01:16:08.832553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.560 [2024-07-26 01:16:08.832584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:38.560 [2024-07-26 01:16:08.839340] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f5be8 00:33:38.560 [2024-07-26 01:16:08.840272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.560 [2024-07-26 01:16:08.840298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:38.560 [2024-07-26 01:16:08.851237] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e8088 00:33:38.560 [2024-07-26 01:16:08.852113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.560 [2024-07-26 01:16:08.852156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:38.560 [2024-07-26 01:16:08.865224] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190eb760 00:33:38.560 [2024-07-26 01:16:08.866338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.560 [2024-07-26 01:16:08.866379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:38.560 [2024-07-26 01:16:08.878208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xe3d480) with pdu=0x2000190f92c0 00:33:38.560 [2024-07-26 01:16:08.879439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.560 [2024-07-26 01:16:08.879470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:38.560 [2024-07-26 01:16:08.890019] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e1710 00:33:38.560 [2024-07-26 01:16:08.891314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.560 [2024-07-26 01:16:08.891358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:38.560 [2024-07-26 01:16:08.903240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ff3c8 00:33:38.560 [2024-07-26 01:16:08.904626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.560 [2024-07-26 01:16:08.904656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:38.560 [2024-07-26 01:16:08.916355] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f20d8 00:33:38.560 [2024-07-26 01:16:08.917927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.560 [2024-07-26 01:16:08.917958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:38.560 [2024-07-26 01:16:08.928136] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190feb58 00:33:38.560 [2024-07-26 01:16:08.929257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.560 [2024-07-26 01:16:08.929298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:38.560 [2024-07-26 01:16:08.940858] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190eb760 00:33:38.560 [2024-07-26 01:16:08.941831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.560 [2024-07-26 01:16:08.941862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:38.560 [2024-07-26 01:16:08.955305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190df550 00:33:38.560 [2024-07-26 01:16:08.957254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.560 [2024-07-26 01:16:08.957295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:38.560 [2024-07-26 01:16:08.967105] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190e6fa8 00:33:38.560 [2024-07-26 01:16:08.968563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.560 [2024-07-26 01:16:08.968590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
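Every pair of entries in this run follows the same shape: `tcp.c:data_crc32_calc_done` reports a CRC32C mismatch on a received data PDU, and the host then prints the affected WRITE command and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, which is exactly what this digest-error test injects and expects. As a minimal illustration (not SPDK's implementation, which uses an accelerated CRC path), the NVMe/TCP data digest is a reflected CRC-32C (Castagnoli):

```python
def crc32c(data: bytes) -> int:
    # Reflected CRC-32C (Castagnoli), polynomial 0x1EDC6F41
    # (reflected form 0x82F63B78), the digest algorithm NVMe/TCP
    # uses for its optional header and data digests.
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard check value for CRC-32C.
assert crc32c(b"123456789") == 0xE3069283
```

When the receiver's recomputed digest differs from the one carried in the PDU (here, because the test corrupts the CRC32C output on purpose), the command is failed with the transient transport error status seen above.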
00:33:38.560 [2024-07-26 01:16:08.978558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190f7100 00:33:38.560 [2024-07-26 01:16:08.980649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.560 [2024-07-26 01:16:08.980682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:38.818 [2024-07-26 01:16:08.991207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:38.818 [2024-07-26 01:16:08.991654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.818 [2024-07-26 01:16:08.991685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:38.818 [2024-07-26 01:16:09.005297] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:38.818 [2024-07-26 01:16:09.005500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.818 [2024-07-26 01:16:09.005532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:38.818 [2024-07-26 01:16:09.019387] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:38.818 [2024-07-26 01:16:09.019611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.818 [2024-07-26 01:16:09.019637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:38.818 [2024-07-26 01:16:09.033496] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:38.818 [2024-07-26 01:16:09.033775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.818 [2024-07-26 01:16:09.033801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:38.818 [2024-07-26 01:16:09.047630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:38.818 [2024-07-26 01:16:09.047916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.818 [2024-07-26 01:16:09.047941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:38.818 [2024-07-26 01:16:09.061684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:38.818 [2024-07-26 01:16:09.061924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.818 [2024-07-26 01:16:09.061949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:38.818 [2024-07-26 01:16:09.075989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:38.818 [2024-07-26 01:16:09.076275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.818 [2024-07-26 01:16:09.076304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:38.818 [2024-07-26 01:16:09.090024] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:38.818 [2024-07-26 01:16:09.090296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.818 [2024-07-26 01:16:09.090322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:38.818 [2024-07-26 01:16:09.104165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:38.818 [2024-07-26 01:16:09.104428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.818 [2024-07-26 01:16:09.104458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:38.818 [2024-07-26 01:16:09.118250] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:38.818 [2024-07-26 01:16:09.118523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.818 [2024-07-26 01:16:09.118549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:38.818 [2024-07-26 01:16:09.132347] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:38.818 [2024-07-26 01:16:09.132609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.818 [2024-07-26 01:16:09.132635] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:38.818 [2024-07-26 01:16:09.146431] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:38.818 [2024-07-26 01:16:09.146716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.818 [2024-07-26 01:16:09.146743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:38.818 [2024-07-26 01:16:09.160540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:38.818 [2024-07-26 01:16:09.160762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.818 [2024-07-26 01:16:09.160804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:38.818 [2024-07-26 01:16:09.174729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:38.818 [2024-07-26 01:16:09.175005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.818 [2024-07-26 01:16:09.175031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:38.818 [2024-07-26 01:16:09.188844] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:38.818 [2024-07-26 01:16:09.189049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.818 
[2024-07-26 01:16:09.189082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:38.818 [2024-07-26 01:16:09.202939] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:38.818 [2024-07-26 01:16:09.203170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.819 [2024-07-26 01:16:09.203197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:38.819 [2024-07-26 01:16:09.217085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:38.819 [2024-07-26 01:16:09.217325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.819 [2024-07-26 01:16:09.217351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:38.819 [2024-07-26 01:16:09.231417] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:38.819 [2024-07-26 01:16:09.231702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:38.819 [2024-07-26 01:16:09.231728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:39.077 [2024-07-26 01:16:09.245555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:39.077 [2024-07-26 01:16:09.245794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10872 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:33:39.077 [2024-07-26 01:16:09.245823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:39.077 [2024-07-26 01:16:09.259737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:39.077 [2024-07-26 01:16:09.260017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.077 [2024-07-26 01:16:09.260042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:39.077 [2024-07-26 01:16:09.273755] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:39.077 [2024-07-26 01:16:09.273992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.077 [2024-07-26 01:16:09.274018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:39.077 [2024-07-26 01:16:09.287874] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:39.077 [2024-07-26 01:16:09.288158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.077 [2024-07-26 01:16:09.288185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:39.077 [2024-07-26 01:16:09.302003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:39.077 [2024-07-26 01:16:09.302298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:42 nsid:1 lba:12098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.077 [2024-07-26 01:16:09.302325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:39.077 [2024-07-26 01:16:09.316122] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:39.077 [2024-07-26 01:16:09.316369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.077 [2024-07-26 01:16:09.316395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:39.077 [2024-07-26 01:16:09.330172] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:39.077 [2024-07-26 01:16:09.330420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.077 [2024-07-26 01:16:09.330446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:39.077 [2024-07-26 01:16:09.344396] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:39.077 [2024-07-26 01:16:09.344683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.077 [2024-07-26 01:16:09.344710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:39.077 [2024-07-26 01:16:09.358659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:39.077 [2024-07-26 01:16:09.358865] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.077 [2024-07-26 01:16:09.358913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:39.077 [2024-07-26 01:16:09.372872] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:39.077 [2024-07-26 01:16:09.373094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.077 [2024-07-26 01:16:09.373137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:39.077 [2024-07-26 01:16:09.387158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:39.077 [2024-07-26 01:16:09.387417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.077 [2024-07-26 01:16:09.387457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:39.077 [2024-07-26 01:16:09.401197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:39.077 [2024-07-26 01:16:09.401453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.077 [2024-07-26 01:16:09.401479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:39.077 [2024-07-26 01:16:09.415339] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:39.077 
[2024-07-26 01:16:09.415612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.077 [2024-07-26 01:16:09.415638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:39.077 [2024-07-26 01:16:09.429434] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:39.077 [2024-07-26 01:16:09.429729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.077 [2024-07-26 01:16:09.429755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:39.077 [2024-07-26 01:16:09.443587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d480) with pdu=0x2000190ed4e8 00:33:39.077 [2024-07-26 01:16:09.443827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:39.077 [2024-07-26 01:16:09.443853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:39.077 00:33:39.077 Latency(us) 00:33:39.077 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:39.077 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:39.077 nvme0n1 : 2.01 20755.32 81.08 0.00 0.00 6152.15 3252.53 15728.64 00:33:39.077 =================================================================================================================== 00:33:39.077 Total : 20755.32 81.08 0.00 0.00 6152.15 3252.53 15728.64 00:33:39.077 0 00:33:39.077 01:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount 
nvme0n1 00:33:39.077 01:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:39.077 01:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:39.077 | .driver_specific 00:33:39.077 | .nvme_error 00:33:39.077 | .status_code 00:33:39.077 | .command_transient_transport_error' 00:33:39.077 01:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:39.335 01:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 163 > 0 )) 00:33:39.335 01:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1977128 00:33:39.335 01:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1977128 ']' 00:33:39.335 01:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1977128 00:33:39.335 01:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:33:39.335 01:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:39.335 01:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1977128 00:33:39.335 01:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:39.335 01:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:39.335 01:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1977128' 00:33:39.335 killing process with pid 1977128 00:33:39.335 01:16:09 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1977128 00:33:39.335 Received shutdown signal, test time was about 2.000000 seconds 00:33:39.335 00:33:39.335 Latency(us) 00:33:39.335 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:39.335 =================================================================================================================== 00:33:39.335 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:39.335 01:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1977128 00:33:39.592 01:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:33:39.593 01:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:39.593 01:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:39.593 01:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:39.593 01:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:39.593 01:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1977540 00:33:39.593 01:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:33:39.593 01:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1977540 /var/tmp/bperf.sock 00:33:39.593 01:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1977540 ']' 00:33:39.593 01:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:39.593 01:16:09 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:39.593 01:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:39.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:39.593 01:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:39.593 01:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:39.593 [2024-07-26 01:16:09.990799] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:33:39.593 [2024-07-26 01:16:09.990892] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1977540 ] 00:33:39.593 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:39.593 Zero copy mechanism will not be used. 
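The `get_transient_errcount` helper traced earlier pipes `bdev_get_iostat -b nvme0n1` through the jq filter `.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error`, and the test then asserts the count is positive (`(( 163 > 0 ))` above). A sketch of the same field navigation, using a hypothetical iostat payload shaped to match that filter (the counter value 163 is taken from the logged check):

```python
import json

# Hypothetical bdev_get_iostat-style payload; only the fields the
# jq filter touches are included, and 163 mirrors the logged count.
iostat = json.loads("""
{"bdevs": [{"name": "nvme0n1",
            "driver_specific": {"nvme_error": {"status_code": {
                "command_transient_transport_error": 163}}}}]}
""")

# Same navigation as:
# .bdevs[0] | .driver_specific | .nvme_error | .status_code
#           | .command_transient_transport_error
count = (iostat["bdevs"][0]["driver_specific"]
         ["nvme_error"]["status_code"]
         ["command_transient_transport_error"])
assert count > 0  # the digest-error test passes only if errors were counted
```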
00:33:39.851 EAL: No free 2048 kB hugepages reported on node 1 00:33:39.851 [2024-07-26 01:16:10.054014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:39.851 [2024-07-26 01:16:10.143328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:39.851 01:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:39.851 01:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:33:39.851 01:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:39.851 01:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:40.108 01:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:40.108 01:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.108 01:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:40.108 01:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.108 01:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:40.108 01:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:40.672 nvme0n1 00:33:40.672 01:16:10 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:40.672 01:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.672 01:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:40.672 01:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.672 01:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:40.673 01:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:40.673 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:40.673 Zero copy mechanism will not be used. 00:33:40.673 Running I/O for 2 seconds... 
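The Latency summaries bdevperf prints after each run (e.g. the earlier 20755.32 IOPS / 81.08 MiB/s line for the 4096-byte randwrite job) relate their two throughput columns by the job's I/O size. A small check of that arithmetic, using the figures from the log:

```python
# Reproduce the first run's Latency summary arithmetic:
# 20755.32 IOPS at 4096-byte I/Os should give the 81.08 MiB/s column.
iops = 20755.32          # from the logged nvme0n1 summary line
io_size = 4096           # bytes, from "IO size: 4096" in the job line
mib_s = iops * io_size / (1024 * 1024)
assert round(mib_s, 2) == 81.08
```

The second run above uses 131072-byte I/Os at queue depth 16, so its eventual summary scales the same way with the larger I/O size.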
00:33:40.673 [2024-07-26 01:16:10.999154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.673 [2024-07-26 01:16:10.999487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.673 [2024-07-26 01:16:10.999522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.673 [2024-07-26 01:16:11.006925] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.673 [2024-07-26 01:16:11.007257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.673 [2024-07-26 01:16:11.007287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.673 [2024-07-26 01:16:11.014783] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.673 [2024-07-26 01:16:11.015198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.673 [2024-07-26 01:16:11.015226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.673 [2024-07-26 01:16:11.022648] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.673 [2024-07-26 01:16:11.023077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.673 [2024-07-26 01:16:11.023125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.673 [2024-07-26 01:16:11.031105] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.673 [2024-07-26 01:16:11.031442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.673 [2024-07-26 01:16:11.031485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.673 [2024-07-26 01:16:11.038779] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.673 [2024-07-26 01:16:11.039152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.673 [2024-07-26 01:16:11.039180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.673 [2024-07-26 01:16:11.047010] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.673 [2024-07-26 01:16:11.047320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.673 [2024-07-26 01:16:11.047349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.673 [2024-07-26 01:16:11.054831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.673 [2024-07-26 01:16:11.055157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.673 [2024-07-26 01:16:11.055186] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.673 [2024-07-26 01:16:11.062749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.673 [2024-07-26 01:16:11.063136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.673 [2024-07-26 01:16:11.063166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.673 [2024-07-26 01:16:11.070796] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.673 [2024-07-26 01:16:11.071198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.673 [2024-07-26 01:16:11.071226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.673 [2024-07-26 01:16:11.078821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.673 [2024-07-26 01:16:11.079160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.673 [2024-07-26 01:16:11.079190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.673 [2024-07-26 01:16:11.086951] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.673 [2024-07-26 01:16:11.087281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:40.673 [2024-07-26 01:16:11.087311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.673 [2024-07-26 01:16:11.094884] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.673 [2024-07-26 01:16:11.095197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.673 [2024-07-26 01:16:11.095227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.931 [2024-07-26 01:16:11.102842] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.931 [2024-07-26 01:16:11.103188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.931 [2024-07-26 01:16:11.103220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.931 [2024-07-26 01:16:11.110421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.931 [2024-07-26 01:16:11.110729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.931 [2024-07-26 01:16:11.110758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.931 [2024-07-26 01:16:11.118359] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.931 [2024-07-26 01:16:11.118759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.931 [2024-07-26 01:16:11.118788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.931 [2024-07-26 01:16:11.126525] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.931 [2024-07-26 01:16:11.126837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.931 [2024-07-26 01:16:11.126870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.931 [2024-07-26 01:16:11.134563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.931 [2024-07-26 01:16:11.134907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.931 [2024-07-26 01:16:11.134936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.931 [2024-07-26 01:16:11.142552] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.931 [2024-07-26 01:16:11.142659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.931 [2024-07-26 01:16:11.142687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.931 [2024-07-26 01:16:11.150603] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.931 [2024-07-26 01:16:11.150906] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.931 [2024-07-26 01:16:11.150939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.931 [2024-07-26 01:16:11.158660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.931 [2024-07-26 01:16:11.158982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.931 [2024-07-26 01:16:11.159016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.931 [2024-07-26 01:16:11.166714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.931 [2024-07-26 01:16:11.167027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.931 [2024-07-26 01:16:11.167079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.931 [2024-07-26 01:16:11.174468] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.931 [2024-07-26 01:16:11.174775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.931 [2024-07-26 01:16:11.174804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.931 [2024-07-26 01:16:11.182716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 
00:33:40.931 [2024-07-26 01:16:11.183030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.931 [2024-07-26 01:16:11.183065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.931 [2024-07-26 01:16:11.190686] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.931 [2024-07-26 01:16:11.191007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.931 [2024-07-26 01:16:11.191036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.931 [2024-07-26 01:16:11.198646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.931 [2024-07-26 01:16:11.198960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.931 [2024-07-26 01:16:11.198988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.931 [2024-07-26 01:16:11.206859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.931 [2024-07-26 01:16:11.207181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.931 [2024-07-26 01:16:11.207211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.931 [2024-07-26 01:16:11.214697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.931 [2024-07-26 01:16:11.214994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.931 [2024-07-26 01:16:11.215027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.931 [2024-07-26 01:16:11.222849] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.932 [2024-07-26 01:16:11.223182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.932 [2024-07-26 01:16:11.223227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.932 [2024-07-26 01:16:11.230684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.932 [2024-07-26 01:16:11.231109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.932 [2024-07-26 01:16:11.231139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.932 [2024-07-26 01:16:11.238984] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.932 [2024-07-26 01:16:11.239315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.932 [2024-07-26 01:16:11.239345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.932 [2024-07-26 01:16:11.246601] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.932 [2024-07-26 01:16:11.246912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.932 [2024-07-26 01:16:11.246941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.932 [2024-07-26 01:16:11.254743] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.932 [2024-07-26 01:16:11.255119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.932 [2024-07-26 01:16:11.255149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.932 [2024-07-26 01:16:11.262777] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.932 [2024-07-26 01:16:11.263115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.932 [2024-07-26 01:16:11.263164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.932 [2024-07-26 01:16:11.270840] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.932 [2024-07-26 01:16:11.271182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.932 [2024-07-26 01:16:11.271212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:33:40.932 [2024-07-26 01:16:11.279073] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.932 [2024-07-26 01:16:11.279426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.932 [2024-07-26 01:16:11.279456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.932 [2024-07-26 01:16:11.286365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.932 [2024-07-26 01:16:11.286474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.932 [2024-07-26 01:16:11.286502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.932 [2024-07-26 01:16:11.294131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.932 [2024-07-26 01:16:11.294427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.932 [2024-07-26 01:16:11.294464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.932 [2024-07-26 01:16:11.301716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.932 [2024-07-26 01:16:11.302021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.932 [2024-07-26 01:16:11.302072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.932 [2024-07-26 01:16:11.309695] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.932 [2024-07-26 01:16:11.310041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.932 [2024-07-26 01:16:11.310088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.932 [2024-07-26 01:16:11.317333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.932 [2024-07-26 01:16:11.317621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.932 [2024-07-26 01:16:11.317653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.932 [2024-07-26 01:16:11.324820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.932 [2024-07-26 01:16:11.325105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.932 [2024-07-26 01:16:11.325136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.932 [2024-07-26 01:16:11.332194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.932 [2024-07-26 01:16:11.332487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.932 [2024-07-26 01:16:11.332520] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.932 [2024-07-26 01:16:11.339441] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.932 [2024-07-26 01:16:11.339762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.932 [2024-07-26 01:16:11.339792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.932 [2024-07-26 01:16:11.346745] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.932 [2024-07-26 01:16:11.347056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.932 [2024-07-26 01:16:11.347093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.932 [2024-07-26 01:16:11.354120] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:40.932 [2024-07-26 01:16:11.354412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.932 [2024-07-26 01:16:11.354443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.190 [2024-07-26 01:16:11.361544] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.190 [2024-07-26 01:16:11.361844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:41.190 [2024-07-26 01:16:11.361876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.190 [2024-07-26 01:16:11.369400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.190 [2024-07-26 01:16:11.369678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.190 [2024-07-26 01:16:11.369708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.190 [2024-07-26 01:16:11.376874] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.190 [2024-07-26 01:16:11.377200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.190 [2024-07-26 01:16:11.377244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.190 [2024-07-26 01:16:11.384563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.190 [2024-07-26 01:16:11.384833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.190 [2024-07-26 01:16:11.384876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.190 [2024-07-26 01:16:11.392173] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.190 [2024-07-26 01:16:11.392507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.190 [2024-07-26 01:16:11.392541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.190 [2024-07-26 01:16:11.399747] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.190 [2024-07-26 01:16:11.400043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.190 [2024-07-26 01:16:11.400082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.190 [2024-07-26 01:16:11.407027] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.190 [2024-07-26 01:16:11.407331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.190 [2024-07-26 01:16:11.407362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.190 [2024-07-26 01:16:11.414407] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.190 [2024-07-26 01:16:11.414678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.190 [2024-07-26 01:16:11.414708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.190 [2024-07-26 01:16:11.421784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.190 [2024-07-26 01:16:11.422098] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.191 [2024-07-26 01:16:11.422127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.191 [2024-07-26 01:16:11.429087] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.191 [2024-07-26 01:16:11.429363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.191 [2024-07-26 01:16:11.429393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.191 [2024-07-26 01:16:11.436754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.191 [2024-07-26 01:16:11.437022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.191 [2024-07-26 01:16:11.437051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.191 [2024-07-26 01:16:11.444494] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.191 [2024-07-26 01:16:11.444758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.191 [2024-07-26 01:16:11.444801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.191 [2024-07-26 01:16:11.451727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 
00:33:41.191 [2024-07-26 01:16:11.451986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.191 [2024-07-26 01:16:11.452016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:41.191 [2024-07-26 01:16:11.459603] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.191 [2024-07-26 01:16:11.459925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.191 [2024-07-26 01:16:11.459954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:41.191 [2024-07-26 01:16:11.466795] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.191 [2024-07-26 01:16:11.467076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.191 [2024-07-26 01:16:11.467119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:41.191 [2024-07-26 01:16:11.474246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.191 [2024-07-26 01:16:11.474526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.191 [2024-07-26 01:16:11.474568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:41.191 [2024-07-26 01:16:11.482070] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.191 [2024-07-26 01:16:11.482330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.191 [2024-07-26 01:16:11.482360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:41.191 [2024-07-26 01:16:11.489578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.191 [2024-07-26 01:16:11.489923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.191 [2024-07-26 01:16:11.489969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:41.191 [2024-07-26 01:16:11.497202] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.191 [2024-07-26 01:16:11.497468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.191 [2024-07-26 01:16:11.497498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:41.191 [2024-07-26 01:16:11.504767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.191 [2024-07-26 01:16:11.505027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.191 [2024-07-26 01:16:11.505057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:41.191 [2024-07-26 01:16:11.512106] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.191 [2024-07-26 01:16:11.512400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.191 [2024-07-26 01:16:11.512429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:41.191 [2024-07-26 01:16:11.519316] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.191 [2024-07-26 01:16:11.519610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.191 [2024-07-26 01:16:11.519640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:41.191 [2024-07-26 01:16:11.526467] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.191 [2024-07-26 01:16:11.526786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.191 [2024-07-26 01:16:11.526816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:41.191 [2024-07-26 01:16:11.533980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.191 [2024-07-26 01:16:11.534313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.191 [2024-07-26 01:16:11.534343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:41.191 [2024-07-26 01:16:11.541785] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.191 [2024-07-26 01:16:11.542072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.191 [2024-07-26 01:16:11.542116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:41.191 [2024-07-26 01:16:11.549466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.191 [2024-07-26 01:16:11.549739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.191 [2024-07-26 01:16:11.549769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:41.191 [2024-07-26 01:16:11.556890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.191 [2024-07-26 01:16:11.557204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.191 [2024-07-26 01:16:11.557234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:41.191 [2024-07-26 01:16:11.564007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.191 [2024-07-26 01:16:11.564341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.191 [2024-07-26 01:16:11.564371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:41.191 [2024-07-26 01:16:11.571465] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.191 [2024-07-26 01:16:11.571761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.191 [2024-07-26 01:16:11.571790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:41.191 [2024-07-26 01:16:11.578656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.191 [2024-07-26 01:16:11.579008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.191 [2024-07-26 01:16:11.579037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:41.191 [2024-07-26 01:16:11.586110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.191 [2024-07-26 01:16:11.586393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.191 [2024-07-26 01:16:11.586422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:41.191 [2024-07-26 01:16:11.593644] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.191 [2024-07-26 01:16:11.594001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.191 [2024-07-26 01:16:11.594030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:41.191 [2024-07-26 01:16:11.600801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.191 [2024-07-26 01:16:11.601089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.191 [2024-07-26 01:16:11.601120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:41.191 [2024-07-26 01:16:11.608233] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.191 [2024-07-26 01:16:11.608534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.191 [2024-07-26 01:16:11.608563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:41.191 [2024-07-26 01:16:11.615628] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.191 [2024-07-26 01:16:11.615917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.191 [2024-07-26 01:16:11.615945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:41.450 [2024-07-26 01:16:11.622908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.450 [2024-07-26 01:16:11.623216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.450 [2024-07-26 01:16:11.623250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:41.450 [2024-07-26 01:16:11.630613] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.450 [2024-07-26 01:16:11.630900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.450 [2024-07-26 01:16:11.630945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:41.450 [2024-07-26 01:16:11.637826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.450 [2024-07-26 01:16:11.638109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.450 [2024-07-26 01:16:11.638137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:41.450 [2024-07-26 01:16:11.645474] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.450 [2024-07-26 01:16:11.645840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.450 [2024-07-26 01:16:11.645870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:41.450 [2024-07-26 01:16:11.653199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.450 [2024-07-26 01:16:11.653540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.450 [2024-07-26 01:16:11.653569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:41.450 [2024-07-26 01:16:11.660455] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.450 [2024-07-26 01:16:11.660716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.450 [2024-07-26 01:16:11.660744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:41.450 [2024-07-26 01:16:11.667847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.450 [2024-07-26 01:16:11.668154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.450 [2024-07-26 01:16:11.668183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:41.450 [2024-07-26 01:16:11.675265] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.450 [2024-07-26 01:16:11.675582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.450 [2024-07-26 01:16:11.675611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:41.450 [2024-07-26 01:16:11.682670] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.450 [2024-07-26 01:16:11.682969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.450 [2024-07-26 01:16:11.683021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:41.450 [2024-07-26 01:16:11.690456] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.450 [2024-07-26 01:16:11.690752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.450 [2024-07-26 01:16:11.690785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:41.450 [2024-07-26 01:16:11.697769] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.450 [2024-07-26 01:16:11.698157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.450 [2024-07-26 01:16:11.698188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:41.450 [2024-07-26 01:16:11.705520] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.450 [2024-07-26 01:16:11.705926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.450 [2024-07-26 01:16:11.705954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:41.450 [2024-07-26 01:16:11.713314] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.450 [2024-07-26 01:16:11.713594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.450 [2024-07-26 01:16:11.713623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:41.450 [2024-07-26 01:16:11.720927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.450 [2024-07-26 01:16:11.721260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.450 [2024-07-26 01:16:11.721291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:41.450 [2024-07-26 01:16:11.728635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.450 [2024-07-26 01:16:11.728919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.450 [2024-07-26 01:16:11.728947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:41.450 [2024-07-26 01:16:11.736492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.450 [2024-07-26 01:16:11.736782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.450 [2024-07-26 01:16:11.736811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:41.450 [2024-07-26 01:16:11.743994] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.450 [2024-07-26 01:16:11.744289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.450 [2024-07-26 01:16:11.744329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:41.450 [2024-07-26 01:16:11.751543] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.450 [2024-07-26 01:16:11.751822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.450 [2024-07-26 01:16:11.751849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:41.450 [2024-07-26 01:16:11.759632] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.450 [2024-07-26 01:16:11.759895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.450 [2024-07-26 01:16:11.759923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:41.450 [2024-07-26 01:16:11.766996] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.450 [2024-07-26 01:16:11.767352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.450 [2024-07-26 01:16:11.767381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:41.450 [2024-07-26 01:16:11.774587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.450 [2024-07-26 01:16:11.774874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.450 [2024-07-26 01:16:11.774903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:41.450 [2024-07-26 01:16:11.782238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.450 [2024-07-26 01:16:11.782557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.450 [2024-07-26 01:16:11.782586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:41.450 [2024-07-26 01:16:11.790109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.450 [2024-07-26 01:16:11.790423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.450 [2024-07-26 01:16:11.790451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:41.450 [2024-07-26 01:16:11.797800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.450 [2024-07-26 01:16:11.798115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.450 [2024-07-26 01:16:11.798161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:41.450 [2024-07-26 01:16:11.805554] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.450 [2024-07-26 01:16:11.805854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.450 [2024-07-26 01:16:11.805882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:41.450 [2024-07-26 01:16:11.813041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.450 [2024-07-26 01:16:11.813314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.450 [2024-07-26 01:16:11.813342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:41.450 [2024-07-26 01:16:11.820522] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.450 [2024-07-26 01:16:11.820832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.450 [2024-07-26 01:16:11.820863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:41.450 [2024-07-26 01:16:11.828854] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.450 [2024-07-26 01:16:11.829245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.450 [2024-07-26 01:16:11.829275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:41.450 [2024-07-26 01:16:11.837240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.450 [2024-07-26 01:16:11.837607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.450 [2024-07-26 01:16:11.837635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:41.450 [2024-07-26 01:16:11.846056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.450 [2024-07-26 01:16:11.846463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.450 [2024-07-26 01:16:11.846501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:41.450 [2024-07-26 01:16:11.855162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.450 [2024-07-26 01:16:11.855546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.450 [2024-07-26 01:16:11.855585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:41.451 [2024-07-26 01:16:11.863610] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.451 [2024-07-26 01:16:11.864011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.451 [2024-07-26 01:16:11.864071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:41.451 [2024-07-26 01:16:11.872170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.451 [2024-07-26 01:16:11.872618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.451 [2024-07-26 01:16:11.872647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:41.708 [2024-07-26 01:16:11.881295] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.708 [2024-07-26 01:16:11.881683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.708 [2024-07-26 01:16:11.881710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:41.708 [2024-07-26 01:16:11.889908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.708 [2024-07-26 01:16:11.890307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.708 [2024-07-26 01:16:11.890365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:41.708 [2024-07-26 01:16:11.898646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.708 [2024-07-26 01:16:11.898966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.708 [2024-07-26 01:16:11.898995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:41.708 [2024-07-26 01:16:11.907194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.709 [2024-07-26 01:16:11.907630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.709 [2024-07-26 01:16:11.907658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:41.709 [2024-07-26 01:16:11.915700] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.709 [2024-07-26 01:16:11.916083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.709 [2024-07-26 01:16:11.916122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:41.709 [2024-07-26 01:16:11.923771] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.709 [2024-07-26 01:16:11.924208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.709 [2024-07-26 01:16:11.924237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:41.709 [2024-07-26 01:16:11.932313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.709 [2024-07-26 01:16:11.932684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.709 [2024-07-26 01:16:11.932726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:41.709 [2024-07-26 01:16:11.941189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.709 [2024-07-26 01:16:11.941588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.709 [2024-07-26 01:16:11.941623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:41.709 [2024-07-26 01:16:11.949541] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.709 [2024-07-26 01:16:11.949947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.709 [2024-07-26 01:16:11.949974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:41.709 [2024-07-26 01:16:11.958305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.709 [2024-07-26 01:16:11.958709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.709 [2024-07-26 01:16:11.958736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:41.709 [2024-07-26 01:16:11.967083] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.709 [2024-07-26 01:16:11.967442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.709 [2024-07-26 01:16:11.967469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:41.709 [2024-07-26 01:16:11.975442] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.709 [2024-07-26 01:16:11.975827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.709 [2024-07-26 01:16:11.975875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:41.709 [2024-07-26 01:16:11.983868] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.709 [2024-07-26 01:16:11.984250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.709 [2024-07-26 01:16:11.984283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:41.709 [2024-07-26 01:16:11.992543] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.709 [2024-07-26 01:16:11.992925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.709 [2024-07-26 01:16:11.992958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:41.709 [2024-07-26 01:16:12.000845] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.709 [2024-07-26 01:16:12.001216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.709 [2024-07-26 01:16:12.001248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:41.709 [2024-07-26 01:16:12.009305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.709 [2024-07-26 01:16:12.009722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.709 [2024-07-26 01:16:12.009749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:41.709 [2024-07-26 01:16:12.017518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.709 [2024-07-26 01:16:12.017818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.709 [2024-07-26 01:16:12.017846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:41.709 [2024-07-26 01:16:12.026323] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.709 [2024-07-26 01:16:12.026709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.709 [2024-07-26 01:16:12.026742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:41.709 [2024-07-26 01:16:12.035181] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.709 [2024-07-26 01:16:12.035434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.709 [2024-07-26 01:16:12.035463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:41.709 [2024-07-26 01:16:12.043995] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.709 [2024-07-26 01:16:12.044365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.709 [2024-07-26 01:16:12.044403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:41.709 [2024-07-26 01:16:12.051775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.709 [2024-07-26 01:16:12.052027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.709 [2024-07-26 01:16:12.052055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:41.709 [2024-07-26 01:16:12.059958] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.709 [2024-07-26 01:16:12.060321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.709 [2024-07-26 01:16:12.060367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:41.709 [2024-07-26 01:16:12.068741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.709 [2024-07-26 01:16:12.069086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.709 [2024-07-26 01:16:12.069113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:41.709 [2024-07-26 01:16:12.077223] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.709 [2024-07-26 01:16:12.077568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.709 [2024-07-26 01:16:12.077599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:41.709 [2024-07-26 01:16:12.085384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.709 [2024-07-26 01:16:12.085721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.709 [2024-07-26 01:16:12.085749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:41.709 [2024-07-26 01:16:12.093574] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90
00:33:41.709 [2024-07-26 01:16:12.093973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.709 [2024-07-26 01:16:12.094006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0
m:0 dnr:0 00:33:41.709 [2024-07-26 01:16:12.101944] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.709 [2024-07-26 01:16:12.102285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.709 [2024-07-26 01:16:12.102312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.709 [2024-07-26 01:16:12.110650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.709 [2024-07-26 01:16:12.110917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.709 [2024-07-26 01:16:12.110949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.709 [2024-07-26 01:16:12.118656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.709 [2024-07-26 01:16:12.118960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.709 [2024-07-26 01:16:12.118988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.709 [2024-07-26 01:16:12.126822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.709 [2024-07-26 01:16:12.127144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.709 [2024-07-26 01:16:12.127171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.967 [2024-07-26 01:16:12.135376] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.967 [2024-07-26 01:16:12.135642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.967 [2024-07-26 01:16:12.135671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.967 [2024-07-26 01:16:12.144073] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.967 [2024-07-26 01:16:12.144410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.967 [2024-07-26 01:16:12.144438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.967 [2024-07-26 01:16:12.152375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.967 [2024-07-26 01:16:12.152735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.967 [2024-07-26 01:16:12.152763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.967 [2024-07-26 01:16:12.161033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.967 [2024-07-26 01:16:12.161407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.967 [2024-07-26 01:16:12.161435] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.967 [2024-07-26 01:16:12.169394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.967 [2024-07-26 01:16:12.169763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.967 [2024-07-26 01:16:12.169801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.967 [2024-07-26 01:16:12.177896] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.967 [2024-07-26 01:16:12.178303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.967 [2024-07-26 01:16:12.178331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.968 [2024-07-26 01:16:12.186268] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.968 [2024-07-26 01:16:12.186681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.968 [2024-07-26 01:16:12.186717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.968 [2024-07-26 01:16:12.195015] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.968 [2024-07-26 01:16:12.195373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:41.968 [2024-07-26 01:16:12.195403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.968 [2024-07-26 01:16:12.203125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.968 [2024-07-26 01:16:12.203531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.968 [2024-07-26 01:16:12.203558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.968 [2024-07-26 01:16:12.211556] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.968 [2024-07-26 01:16:12.211851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.968 [2024-07-26 01:16:12.211883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.968 [2024-07-26 01:16:12.220199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.968 [2024-07-26 01:16:12.220581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.968 [2024-07-26 01:16:12.220611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.968 [2024-07-26 01:16:12.228164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.968 [2024-07-26 01:16:12.228510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.968 [2024-07-26 01:16:12.228539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.968 [2024-07-26 01:16:12.236396] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.968 [2024-07-26 01:16:12.236665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.968 [2024-07-26 01:16:12.236692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.968 [2024-07-26 01:16:12.244731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.968 [2024-07-26 01:16:12.245076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.968 [2024-07-26 01:16:12.245118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.968 [2024-07-26 01:16:12.253300] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.968 [2024-07-26 01:16:12.253627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.968 [2024-07-26 01:16:12.253657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.968 [2024-07-26 01:16:12.261584] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.968 [2024-07-26 01:16:12.261853] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.968 [2024-07-26 01:16:12.261881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.968 [2024-07-26 01:16:12.269391] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.968 [2024-07-26 01:16:12.269769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.968 [2024-07-26 01:16:12.269800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.968 [2024-07-26 01:16:12.277227] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.968 [2024-07-26 01:16:12.277565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.968 [2024-07-26 01:16:12.277593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.968 [2024-07-26 01:16:12.285466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.968 [2024-07-26 01:16:12.285844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.968 [2024-07-26 01:16:12.285881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.968 [2024-07-26 01:16:12.293778] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 
00:33:41.968 [2024-07-26 01:16:12.294099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.968 [2024-07-26 01:16:12.294127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.968 [2024-07-26 01:16:12.302419] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.968 [2024-07-26 01:16:12.302781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.968 [2024-07-26 01:16:12.302832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.968 [2024-07-26 01:16:12.310967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.968 [2024-07-26 01:16:12.311327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.968 [2024-07-26 01:16:12.311355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.968 [2024-07-26 01:16:12.319569] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.968 [2024-07-26 01:16:12.319934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.968 [2024-07-26 01:16:12.319961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.968 [2024-07-26 01:16:12.327695] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.968 [2024-07-26 01:16:12.328080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.968 [2024-07-26 01:16:12.328124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.968 [2024-07-26 01:16:12.336044] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.968 [2024-07-26 01:16:12.336407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.968 [2024-07-26 01:16:12.336439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.968 [2024-07-26 01:16:12.344485] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.968 [2024-07-26 01:16:12.344756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.968 [2024-07-26 01:16:12.344783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.968 [2024-07-26 01:16:12.352639] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.968 [2024-07-26 01:16:12.353001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.968 [2024-07-26 01:16:12.353028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.968 [2024-07-26 01:16:12.361031] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.968 [2024-07-26 01:16:12.361339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.968 [2024-07-26 01:16:12.361371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.968 [2024-07-26 01:16:12.369666] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.968 [2024-07-26 01:16:12.370015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.968 [2024-07-26 01:16:12.370043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.968 [2024-07-26 01:16:12.378281] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.968 [2024-07-26 01:16:12.378681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.968 [2024-07-26 01:16:12.378708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.968 [2024-07-26 01:16:12.386583] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:41.969 [2024-07-26 01:16:12.386913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.969 [2024-07-26 01:16:12.386940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:33:42.227 [2024-07-26 01:16:12.395010] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:42.227 [2024-07-26 01:16:12.395389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.227 [2024-07-26 01:16:12.395418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.227 [2024-07-26 01:16:12.404305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:42.227 [2024-07-26 01:16:12.404641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.227 [2024-07-26 01:16:12.404669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.227 [2024-07-26 01:16:12.412825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:42.227 [2024-07-26 01:16:12.413152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.227 [2024-07-26 01:16:12.413180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.227 [2024-07-26 01:16:12.421045] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:42.227 [2024-07-26 01:16:12.421366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.227 [2024-07-26 01:16:12.421395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.227 [2024-07-26 01:16:12.429386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:42.227 [2024-07-26 01:16:12.429720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.227 [2024-07-26 01:16:12.429752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.227 [2024-07-26 01:16:12.437877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:42.227 [2024-07-26 01:16:12.438295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.227 [2024-07-26 01:16:12.438323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.227 [2024-07-26 01:16:12.446366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:42.227 [2024-07-26 01:16:12.446682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.227 [2024-07-26 01:16:12.446709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.227 [2024-07-26 01:16:12.454278] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:42.227 [2024-07-26 01:16:12.454576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.227 [2024-07-26 01:16:12.454604] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.227 [2024-07-26 01:16:12.462237] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:42.227 [2024-07-26 01:16:12.462565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.227 [2024-07-26 01:16:12.462592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.227 [2024-07-26 01:16:12.470571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:42.227 [2024-07-26 01:16:12.470945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.227 [2024-07-26 01:16:12.470983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.227 [2024-07-26 01:16:12.479335] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:42.227 [2024-07-26 01:16:12.479693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.227 [2024-07-26 01:16:12.479720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.227 [2024-07-26 01:16:12.487989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:42.227 [2024-07-26 01:16:12.488399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:42.227 [2024-07-26 01:16:12.488427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.227 [2024-07-26 01:16:12.495728] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:42.227 [2024-07-26 01:16:12.496078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.227 [2024-07-26 01:16:12.496107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.227 [2024-07-26 01:16:12.504087] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:42.227 [2024-07-26 01:16:12.504415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.227 [2024-07-26 01:16:12.504443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.227 [2024-07-26 01:16:12.512743] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:42.227 [2024-07-26 01:16:12.513064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.227 [2024-07-26 01:16:12.513107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.227 [2024-07-26 01:16:12.521250] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90 00:33:42.227 [2024-07-26 01:16:12.521622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.227 [2024-07-26 01:16:12.521650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... ~60 near-identical repetitions elided: each injected "tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe3d7c0) with pdu=0x2000190fef90" is followed by the failing WRITE command print (nvme_qpair.c: 243:nvme_io_qpair_print_command, sqid:1 cid:15 nsid:1, len:32, various LBAs) and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, timestamps 01:16:12.529 through 01:16:12.988 ...]
00:33:42.745
00:33:42.745 Latency(us)
00:33:42.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:42.745 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:33:42.745 nvme0n1 : 2.00 3807.07 475.88 0.00 0.00 4192.73 3301.07 14660.65
00:33:42.745 ===================================================================================================================
00:33:42.745 Total : 3807.07 475.88 0.00 0.00 4192.73 3301.07 14660.65
00:33:42.745 0
00:33:42.745 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:42.745 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:42.745 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:42.745 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:42.745 | .driver_specific
00:33:42.745 | .nvme_error
00:33:42.745 | .status_code
00:33:42.745 | .command_transient_transport_error'
00:33:43.003 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 246 > 0 ))
00:33:43.003 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1977540 00:33:43.003
01:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1977540 ']'
00:33:43.003 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1977540
00:33:43.003 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:33:43.003 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:33:43.003 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1977540
00:33:43.003 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:33:43.003 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:33:43.003 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1977540'
killing process with pid 1977540
01:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1977540
Received shutdown signal, test time was about 2.000000 seconds
00:33:43.003
00:33:43.003 Latency(us)
00:33:43.003 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:43.003 ===================================================================================================================
00:33:43.003 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:43.003 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1977540
00:33:43.260 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1976177
00:33:43.260 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1976177 ']'
00:33:43.260 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1976177
00:33:43.260 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:33:43.260 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:33:43.260 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1976177
00:33:43.260 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:33:43.260 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:33:43.260 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1976177'
killing process with pid 1976177
00:33:43.260 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1976177
00:33:43.260 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1976177
00:33:43.517
00:33:43.517 real 0m15.062s
00:33:43.517 user 0m29.545s
00:33:43.517 sys 0m4.204s
00:33:43.517 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable
00:33:43.517 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:43.517 ************************************
00:33:43.517 END TEST nvmf_digest_error
00:33:43.517 ************************************
00:33:43.517 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:33:43.517 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:33:43.517 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:33:43.517 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:33:43.517 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:33:43.517 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:33:43.517 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:33:43.517 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:33:43.517 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:33:43.517 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:33:43.517 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:33:43.517 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1976177 ']'
00:33:43.517 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1976177
00:33:43.517 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 1976177 ']'
00:33:43.517 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 1976177
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1976177) - No such process
00:33:43.517 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 1976177 is not found'
Process with pid 1976177 is not found
00:33:43.517 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:33:43.517 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:33:43.517 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:33:43.517 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
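The repeated '[' -z ... ']' / kill -0 / ps / kill / wait sequences in the teardown trace above come from autotest_common.sh's killprocess helper. The following is a minimal, hypothetical re-creation of that check-then-kill-then-reap pattern for illustration only; it is a simplified sketch, not the actual SPDK helper (which also inspects the process name and special-cases sudo-wrapped reactors):

```shell
#!/usr/bin/env bash
# Hypothetical simplified sketch of the killprocess pattern seen in the
# teardown trace above; the real SPDK helper lives in autotest_common.sh.
killprocess() {
    local pid=$1
    # kill -0 sends no signal; it only probes whether the pid exists
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "Process with pid $pid is not found"
        return 0
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    # wait reaps the child so no zombie is left on the CI node
    wait "$pid" 2>/dev/null
    return 0
}

sleep 60 &
bgpid=$!
killprocess "$bgpid"
```

Reaping with wait is what lets the next test stage start from a clean process table, and the kill -0 probe makes the helper safe to call twice (as the trace shows for pid 1976177, which is already gone on the second call).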
00:33:43.517 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:33:43.517 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:43.517 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:43.518 01:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:46.083 01:16:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:33:46.083
00:33:46.083 real 0m34.686s
00:33:46.083 user 1m0.619s
00:33:46.083 sys 0m9.849s
00:33:46.083 01:16:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable
00:33:46.083 01:16:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:33:46.083 ************************************
00:33:46.083 END TEST nvmf_digest
00:33:46.083 ************************************
00:33:46.083 01:16:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:33:46.083 01:16:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:33:46.083 01:16:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:33:46.083 01:16:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:33:46.083 01:16:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:33:46.083 01:16:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:33:46.084 01:16:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:33:46.084 ************************************
00:33:46.084 START TEST nvmf_bdevperf
00:33:46.084 ************************************
00:33:46.084 01:16:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:33:46.084 * Looking for test storage...
00:33:46.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- #
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:33:46.084 01:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:47.460 01:16:17 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:47.460 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:47.460 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:47.460 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:47.460 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:47.460 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:47.719 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:47.719 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:47.719 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:47.719 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:47.719 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:47.719 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:47.719 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:47.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:47.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:33:47.719 00:33:47.719 --- 10.0.0.2 ping statistics --- 00:33:47.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.719 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:33:47.719 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:47.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:47.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:33:47.719 00:33:47.719 --- 10.0.0.1 ping statistics --- 00:33:47.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.719 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:33:47.719 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:47.719 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:33:47.719 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:47.719 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:47.719 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:47.719 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:47.719 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:47.719 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:47.719 01:16:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:47.719 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:47.719 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:47.719 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:47.719 
01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:47.720 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:47.720 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1979887 00:33:47.720 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:47.720 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1979887 00:33:47.720 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1979887 ']' 00:33:47.720 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:47.720 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:47.720 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:47.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:47.720 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:47.720 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:47.720 [2024-07-26 01:16:18.065181] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:33:47.720 [2024-07-26 01:16:18.065258] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:47.720 EAL: No free 2048 kB hugepages reported on node 1 00:33:47.720 [2024-07-26 01:16:18.127698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:47.978 [2024-07-26 01:16:18.213205] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:47.978 [2024-07-26 01:16:18.213259] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:47.978 [2024-07-26 01:16:18.213283] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:47.978 [2024-07-26 01:16:18.213294] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:47.978 [2024-07-26 01:16:18.213303] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:47.978 [2024-07-26 01:16:18.213453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:47.978 [2024-07-26 01:16:18.213513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:47.978 [2024-07-26 01:16:18.213515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:47.978 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:47.978 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:33:47.978 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:47.979 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:47.979 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:47.979 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:47.979 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:47.979 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.979 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:47.979 [2024-07-26 01:16:18.340221] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:47.979 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.979 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:47.979 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.979 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:47.979 Malloc0 00:33:47.979 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:33:47.979 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:47.979 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.979 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:47.979 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.979 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:47.979 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.979 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:47.979 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.979 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:47.979 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.979 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:47.979 [2024-07-26 01:16:18.403830] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:48.237 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.237 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:48.237 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:48.237 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:48.237 
01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:48.237 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:48.237 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:48.237 { 00:33:48.237 "params": { 00:33:48.237 "name": "Nvme$subsystem", 00:33:48.237 "trtype": "$TEST_TRANSPORT", 00:33:48.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:48.237 "adrfam": "ipv4", 00:33:48.237 "trsvcid": "$NVMF_PORT", 00:33:48.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:48.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:48.237 "hdgst": ${hdgst:-false}, 00:33:48.237 "ddgst": ${ddgst:-false} 00:33:48.237 }, 00:33:48.237 "method": "bdev_nvme_attach_controller" 00:33:48.237 } 00:33:48.237 EOF 00:33:48.237 )") 00:33:48.237 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:48.237 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:48.237 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:48.237 01:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:48.237 "params": { 00:33:48.237 "name": "Nvme1", 00:33:48.237 "trtype": "tcp", 00:33:48.237 "traddr": "10.0.0.2", 00:33:48.237 "adrfam": "ipv4", 00:33:48.237 "trsvcid": "4420", 00:33:48.237 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:48.237 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:48.237 "hdgst": false, 00:33:48.237 "ddgst": false 00:33:48.237 }, 00:33:48.237 "method": "bdev_nvme_attach_controller" 00:33:48.237 }' 00:33:48.237 [2024-07-26 01:16:18.447842] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:33:48.237 [2024-07-26 01:16:18.447931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1979909 ] 00:33:48.237 EAL: No free 2048 kB hugepages reported on node 1 00:33:48.237 [2024-07-26 01:16:18.507675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:48.237 [2024-07-26 01:16:18.595247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:48.495 Running I/O for 1 seconds... 00:33:49.869 00:33:49.869 Latency(us) 00:33:49.869 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:49.869 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:49.869 Verification LBA range: start 0x0 length 0x4000 00:33:49.869 Nvme1n1 : 1.01 7904.71 30.88 0.00 0.00 16127.44 825.27 18447.17 00:33:49.869 =================================================================================================================== 00:33:49.869 Total : 7904.71 30.88 0.00 0.00 16127.44 825.27 18447.17 00:33:49.869 01:16:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1980174 00:33:49.869 01:16:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:49.869 01:16:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:49.869 01:16:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:49.869 01:16:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:49.869 01:16:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:49.869 01:16:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:49.869 01:16:20 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:49.869 { 00:33:49.869 "params": { 00:33:49.869 "name": "Nvme$subsystem", 00:33:49.869 "trtype": "$TEST_TRANSPORT", 00:33:49.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:49.869 "adrfam": "ipv4", 00:33:49.869 "trsvcid": "$NVMF_PORT", 00:33:49.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:49.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:49.869 "hdgst": ${hdgst:-false}, 00:33:49.869 "ddgst": ${ddgst:-false} 00:33:49.869 }, 00:33:49.869 "method": "bdev_nvme_attach_controller" 00:33:49.869 } 00:33:49.869 EOF 00:33:49.869 )") 00:33:49.869 01:16:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:49.869 01:16:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:49.869 01:16:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:49.869 01:16:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:49.869 "params": { 00:33:49.869 "name": "Nvme1", 00:33:49.869 "trtype": "tcp", 00:33:49.869 "traddr": "10.0.0.2", 00:33:49.869 "adrfam": "ipv4", 00:33:49.869 "trsvcid": "4420", 00:33:49.869 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:49.869 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:49.869 "hdgst": false, 00:33:49.869 "ddgst": false 00:33:49.869 }, 00:33:49.869 "method": "bdev_nvme_attach_controller" 00:33:49.869 }' 00:33:49.869 [2024-07-26 01:16:20.163869] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:33:49.869 [2024-07-26 01:16:20.163960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1980174 ]
00:33:49.869 EAL: No free 2048 kB hugepages reported on node 1
00:33:49.869 [2024-07-26 01:16:20.225056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:50.127 [2024-07-26 01:16:20.310763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:33:50.127 Running I/O for 15 seconds...
00:33:53.411 01:16:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1979887
00:33:53.411 01:16:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:33:53.411 [2024-07-26 01:16:23.134740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:31432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:53.411 [2024-07-26 01:16:23.134793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:53.411 [2024-07-26 01:16:23.134828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:31440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:53.411 [2024-07-26 01:16:23.134847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:53.411 [2024-07-26 01:16:23.134868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:53.411 [2024-07-26 01:16:23.134886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:53.411 [2024-07-26 01:16:23.134905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31456 len:8
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:53.411 [2024-07-26 01:16:23.134922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_qpair print_command / print_completion pairs elided: READ commands (lba 31464-31736) and WRITE commands (lba 31816-32296) on sqid:1, each completed with ABORTED - SQ DELETION (00/08) ...]
00:33:53.412 [2024-07-26 01:16:23.138209] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:32304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.412 [2024-07-26 01:16:23.138223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.412 [2024-07-26 01:16:23.138238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:32312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.412 [2024-07-26 01:16:23.138253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.412 [2024-07-26 01:16:23.138269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:32320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.412 [2024-07-26 01:16:23.138283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.412 [2024-07-26 01:16:23.138299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.412 [2024-07-26 01:16:23.138313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.412 [2024-07-26 01:16:23.138329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:32336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.412 [2024-07-26 01:16:23.138343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.412 [2024-07-26 01:16:23.138375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.412 [2024-07-26 01:16:23.138402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.412 [2024-07-26 01:16:23.138419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:32352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.412 [2024-07-26 01:16:23.138435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.412 [2024-07-26 01:16:23.138452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:32360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.412 [2024-07-26 01:16:23.138469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.412 [2024-07-26 01:16:23.138490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:32368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.412 [2024-07-26 01:16:23.138506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.412 [2024-07-26 01:16:23.138523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:32376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.412 [2024-07-26 01:16:23.138539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.412 [2024-07-26 01:16:23.138556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:32384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.412 [2024-07-26 01:16:23.138572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.412 [2024-07-26 01:16:23.138590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.412 
[2024-07-26 01:16:23.138605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.413 [2024-07-26 01:16:23.138622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:32400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.413 [2024-07-26 01:16:23.138637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.413 [2024-07-26 01:16:23.138654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:32408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.413 [2024-07-26 01:16:23.138669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.413 [2024-07-26 01:16:23.138686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:32416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.413 [2024-07-26 01:16:23.138702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.413 [2024-07-26 01:16:23.138719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:32424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.413 [2024-07-26 01:16:23.138734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.413 [2024-07-26 01:16:23.138751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:32432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.413 [2024-07-26 01:16:23.138766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.413 [2024-07-26 01:16:23.138783] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.413 [2024-07-26 01:16:23.138798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.413 [2024-07-26 01:16:23.138814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:32448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.413 [2024-07-26 01:16:23.138829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.413 [2024-07-26 01:16:23.138846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:31744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.413 [2024-07-26 01:16:23.138861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.413 [2024-07-26 01:16:23.138877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:31752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.413 [2024-07-26 01:16:23.138892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.413 [2024-07-26 01:16:23.138914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:31760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.413 [2024-07-26 01:16:23.138930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.413 [2024-07-26 01:16:23.138947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.413 [2024-07-26 01:16:23.138962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:33:53.413 [2024-07-26 01:16:23.138979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.413 [2024-07-26 01:16:23.138994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.413 [2024-07-26 01:16:23.139010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:31784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.413 [2024-07-26 01:16:23.139026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.413 [2024-07-26 01:16:23.139042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:31792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.413 [2024-07-26 01:16:23.139057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.413 [2024-07-26 01:16:23.139098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.413 [2024-07-26 01:16:23.139129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.413 [2024-07-26 01:16:23.139145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81f240 is same with the state(5) to be set 00:33:53.413 [2024-07-26 01:16:23.139161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:53.413 [2024-07-26 01:16:23.139172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:53.413 [2024-07-26 01:16:23.139184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31808 len:8 PRP1 0x0 PRP2 0x0 
00:33:53.413 [2024-07-26 01:16:23.139197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.413 [2024-07-26 01:16:23.139256] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x81f240 was disconnected and freed. reset controller. 00:33:53.413 [2024-07-26 01:16:23.143090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.413 [2024-07-26 01:16:23.143179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.413 [2024-07-26 01:16:23.143986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.413 [2024-07-26 01:16:23.144038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.413 [2024-07-26 01:16:23.144056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.413 [2024-07-26 01:16:23.144303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.413 [2024-07-26 01:16:23.144569] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.413 [2024-07-26 01:16:23.144593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.413 [2024-07-26 01:16:23.144610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.413 [2024-07-26 01:16:23.148212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.413 [2024-07-26 01:16:23.157289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.413 [2024-07-26 01:16:23.157702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.413 [2024-07-26 01:16:23.157729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.413 [2024-07-26 01:16:23.157745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.413 [2024-07-26 01:16:23.157960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.413 [2024-07-26 01:16:23.158229] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.413 [2024-07-26 01:16:23.158253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.413 [2024-07-26 01:16:23.158269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.413 [2024-07-26 01:16:23.161836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.413 [2024-07-26 01:16:23.171330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.413 [2024-07-26 01:16:23.171757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.413 [2024-07-26 01:16:23.171784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.413 [2024-07-26 01:16:23.171800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.413 [2024-07-26 01:16:23.172044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.413 [2024-07-26 01:16:23.172298] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.413 [2024-07-26 01:16:23.172323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.413 [2024-07-26 01:16:23.172339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.413 [2024-07-26 01:16:23.175919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.413 [2024-07-26 01:16:23.185248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.413 [2024-07-26 01:16:23.185787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.413 [2024-07-26 01:16:23.185839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.413 [2024-07-26 01:16:23.185857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.413 [2024-07-26 01:16:23.186104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.413 [2024-07-26 01:16:23.186346] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.413 [2024-07-26 01:16:23.186371] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.413 [2024-07-26 01:16:23.186392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.413 [2024-07-26 01:16:23.189962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.413 [2024-07-26 01:16:23.199260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.413 [2024-07-26 01:16:23.199697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.413 [2024-07-26 01:16:23.199730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.413 [2024-07-26 01:16:23.199754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.413 [2024-07-26 01:16:23.199995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.413 [2024-07-26 01:16:23.200253] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.413 [2024-07-26 01:16:23.200280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.413 [2024-07-26 01:16:23.200296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.413 [2024-07-26 01:16:23.203870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.413 [2024-07-26 01:16:23.213156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.413 [2024-07-26 01:16:23.213577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.413 [2024-07-26 01:16:23.213609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.413 [2024-07-26 01:16:23.213627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.413 [2024-07-26 01:16:23.213866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.413 [2024-07-26 01:16:23.214125] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.413 [2024-07-26 01:16:23.214151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.413 [2024-07-26 01:16:23.214168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.413 [2024-07-26 01:16:23.217739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.413 [2024-07-26 01:16:23.227023] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.413 [2024-07-26 01:16:23.227439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.413 [2024-07-26 01:16:23.227467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.413 [2024-07-26 01:16:23.227482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.413 [2024-07-26 01:16:23.227712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.413 [2024-07-26 01:16:23.227956] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.413 [2024-07-26 01:16:23.227981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.413 [2024-07-26 01:16:23.227997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.413 [2024-07-26 01:16:23.231583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.413 [2024-07-26 01:16:23.240873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.413 [2024-07-26 01:16:23.241296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.413 [2024-07-26 01:16:23.241329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.413 [2024-07-26 01:16:23.241348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.413 [2024-07-26 01:16:23.241587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.413 [2024-07-26 01:16:23.241831] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.413 [2024-07-26 01:16:23.241862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.413 [2024-07-26 01:16:23.241879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.413 [2024-07-26 01:16:23.245607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.413 [2024-07-26 01:16:23.254890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.413 [2024-07-26 01:16:23.255297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.413 [2024-07-26 01:16:23.255330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.413 [2024-07-26 01:16:23.255349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.413 [2024-07-26 01:16:23.255589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.413 [2024-07-26 01:16:23.255834] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.413 [2024-07-26 01:16:23.255860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.413 [2024-07-26 01:16:23.255876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.413 [2024-07-26 01:16:23.259456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.413 [2024-07-26 01:16:23.268730] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.413 [2024-07-26 01:16:23.269153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.413 [2024-07-26 01:16:23.269182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.413 [2024-07-26 01:16:23.269198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.413 [2024-07-26 01:16:23.269432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.413 [2024-07-26 01:16:23.269682] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.413 [2024-07-26 01:16:23.269708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.413 [2024-07-26 01:16:23.269724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.413 [2024-07-26 01:16:23.273313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.413 [2024-07-26 01:16:23.282606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.413 [2024-07-26 01:16:23.283018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.413 [2024-07-26 01:16:23.283049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.413 [2024-07-26 01:16:23.283076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.413 [2024-07-26 01:16:23.283316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.413 [2024-07-26 01:16:23.283560] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.413 [2024-07-26 01:16:23.283585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.413 [2024-07-26 01:16:23.283601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.413 [2024-07-26 01:16:23.287185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.413 [2024-07-26 01:16:23.296474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.413 [2024-07-26 01:16:23.296892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.413 [2024-07-26 01:16:23.296924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.413 [2024-07-26 01:16:23.296942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.413 [2024-07-26 01:16:23.297192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.413 [2024-07-26 01:16:23.297436] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.413 [2024-07-26 01:16:23.297460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.413 [2024-07-26 01:16:23.297477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.414 [2024-07-26 01:16:23.301052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.414 [2024-07-26 01:16:23.310339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.414 [2024-07-26 01:16:23.310749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.414 [2024-07-26 01:16:23.310781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.414 [2024-07-26 01:16:23.310799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.414 [2024-07-26 01:16:23.311037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.414 [2024-07-26 01:16:23.311289] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.414 [2024-07-26 01:16:23.311314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.414 [2024-07-26 01:16:23.311330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.414 [2024-07-26 01:16:23.314900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.414 [2024-07-26 01:16:23.324194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.414 [2024-07-26 01:16:23.324596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.414 [2024-07-26 01:16:23.324629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.414 [2024-07-26 01:16:23.324647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.414 [2024-07-26 01:16:23.324886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.414 [2024-07-26 01:16:23.325142] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.414 [2024-07-26 01:16:23.325167] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.414 [2024-07-26 01:16:23.325184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.414 [2024-07-26 01:16:23.328765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.414 [2024-07-26 01:16:23.338051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.414 [2024-07-26 01:16:23.338471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.414 [2024-07-26 01:16:23.338504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.414 [2024-07-26 01:16:23.338523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.414 [2024-07-26 01:16:23.338770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.414 [2024-07-26 01:16:23.339015] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.414 [2024-07-26 01:16:23.339041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.414 [2024-07-26 01:16:23.339068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.414 [2024-07-26 01:16:23.342661] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.414 [2024-07-26 01:16:23.351934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.414 [2024-07-26 01:16:23.352375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.414 [2024-07-26 01:16:23.352407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.414 [2024-07-26 01:16:23.352425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.414 [2024-07-26 01:16:23.352663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.414 [2024-07-26 01:16:23.352905] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.414 [2024-07-26 01:16:23.352931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.414 [2024-07-26 01:16:23.352947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.414 [2024-07-26 01:16:23.356524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.414 [2024-07-26 01:16:23.365816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.414 [2024-07-26 01:16:23.366218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.414 [2024-07-26 01:16:23.366250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.414 [2024-07-26 01:16:23.366268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.414 [2024-07-26 01:16:23.366507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.414 [2024-07-26 01:16:23.366750] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.414 [2024-07-26 01:16:23.366775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.414 [2024-07-26 01:16:23.366791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.414 [2024-07-26 01:16:23.370370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.414 [2024-07-26 01:16:23.379860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.414 [2024-07-26 01:16:23.380295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.414 [2024-07-26 01:16:23.380327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.414 [2024-07-26 01:16:23.380345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.414 [2024-07-26 01:16:23.380584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.414 [2024-07-26 01:16:23.380828] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.414 [2024-07-26 01:16:23.380852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.414 [2024-07-26 01:16:23.380874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.414 [2024-07-26 01:16:23.384455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.414 [2024-07-26 01:16:23.393731] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.414 [2024-07-26 01:16:23.394140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.414 [2024-07-26 01:16:23.394172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.414 [2024-07-26 01:16:23.394190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.414 [2024-07-26 01:16:23.394428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.414 [2024-07-26 01:16:23.394672] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.414 [2024-07-26 01:16:23.394697] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.414 [2024-07-26 01:16:23.394713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.414 [2024-07-26 01:16:23.398295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.414 [2024-07-26 01:16:23.407579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.414 [2024-07-26 01:16:23.407981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.414 [2024-07-26 01:16:23.408014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.414 [2024-07-26 01:16:23.408032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.414 [2024-07-26 01:16:23.408280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.414 [2024-07-26 01:16:23.408524] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.414 [2024-07-26 01:16:23.408550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.414 [2024-07-26 01:16:23.408567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.414 [2024-07-26 01:16:23.412145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.414 [2024-07-26 01:16:23.421468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.414 [2024-07-26 01:16:23.421894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.414 [2024-07-26 01:16:23.421926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.414 [2024-07-26 01:16:23.421944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.414 [2024-07-26 01:16:23.422192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.414 [2024-07-26 01:16:23.422434] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.414 [2024-07-26 01:16:23.422460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.414 [2024-07-26 01:16:23.422476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.414 [2024-07-26 01:16:23.426051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.414 [2024-07-26 01:16:23.435361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.414 [2024-07-26 01:16:23.435879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.414 [2024-07-26 01:16:23.435932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.414 [2024-07-26 01:16:23.435951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.414 [2024-07-26 01:16:23.436203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.414 [2024-07-26 01:16:23.436447] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.414 [2024-07-26 01:16:23.436473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.414 [2024-07-26 01:16:23.436489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.414 [2024-07-26 01:16:23.440072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.414 [2024-07-26 01:16:23.449363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.414 [2024-07-26 01:16:23.449896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.414 [2024-07-26 01:16:23.449954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.414 [2024-07-26 01:16:23.449973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.414 [2024-07-26 01:16:23.450223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.414 [2024-07-26 01:16:23.450467] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.414 [2024-07-26 01:16:23.450493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.414 [2024-07-26 01:16:23.450510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.414 [2024-07-26 01:16:23.454093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.414 [2024-07-26 01:16:23.463407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.414 [2024-07-26 01:16:23.463799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.414 [2024-07-26 01:16:23.463831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.414 [2024-07-26 01:16:23.463850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.414 [2024-07-26 01:16:23.464100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.414 [2024-07-26 01:16:23.464345] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.414 [2024-07-26 01:16:23.464371] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.414 [2024-07-26 01:16:23.464387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.414 [2024-07-26 01:16:23.467965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.414 [2024-07-26 01:16:23.477265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.414 [2024-07-26 01:16:23.477689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.414 [2024-07-26 01:16:23.477720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.414 [2024-07-26 01:16:23.477738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.414 [2024-07-26 01:16:23.477976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.414 [2024-07-26 01:16:23.478236] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.414 [2024-07-26 01:16:23.478263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.414 [2024-07-26 01:16:23.478280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.414 [2024-07-26 01:16:23.481858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.414 [2024-07-26 01:16:23.491148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.414 [2024-07-26 01:16:23.491558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.414 [2024-07-26 01:16:23.491590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.414 [2024-07-26 01:16:23.491609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.414 [2024-07-26 01:16:23.491847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.414 [2024-07-26 01:16:23.492102] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.414 [2024-07-26 01:16:23.492127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.414 [2024-07-26 01:16:23.492143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.414 [2024-07-26 01:16:23.495711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.414 [2024-07-26 01:16:23.505010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.414 [2024-07-26 01:16:23.505432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.414 [2024-07-26 01:16:23.505464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.414 [2024-07-26 01:16:23.505482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.414 [2024-07-26 01:16:23.505721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.414 [2024-07-26 01:16:23.505964] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.414 [2024-07-26 01:16:23.505989] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.414 [2024-07-26 01:16:23.506006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.414 [2024-07-26 01:16:23.509591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.414 [2024-07-26 01:16:23.518868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.414 [2024-07-26 01:16:23.519296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.414 [2024-07-26 01:16:23.519328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.414 [2024-07-26 01:16:23.519346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.414 [2024-07-26 01:16:23.519584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.414 [2024-07-26 01:16:23.519827] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.414 [2024-07-26 01:16:23.519853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.414 [2024-07-26 01:16:23.519869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.414 [2024-07-26 01:16:23.523464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.414 [2024-07-26 01:16:23.532752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.414 [2024-07-26 01:16:23.533176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.414 [2024-07-26 01:16:23.533208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.414 [2024-07-26 01:16:23.533227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.414 [2024-07-26 01:16:23.533466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.414 [2024-07-26 01:16:23.533710] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.414 [2024-07-26 01:16:23.533736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.414 [2024-07-26 01:16:23.533753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.414 [2024-07-26 01:16:23.537337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.414 [2024-07-26 01:16:23.546617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.414 [2024-07-26 01:16:23.547040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.414 [2024-07-26 01:16:23.547082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.414 [2024-07-26 01:16:23.547103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.414 [2024-07-26 01:16:23.547342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.414 [2024-07-26 01:16:23.547586] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.414 [2024-07-26 01:16:23.547612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.414 [2024-07-26 01:16:23.547628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.414 [2024-07-26 01:16:23.551211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.414 [2024-07-26 01:16:23.560492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.414 [2024-07-26 01:16:23.560914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.414 [2024-07-26 01:16:23.560945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.414 [2024-07-26 01:16:23.560963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.414 [2024-07-26 01:16:23.561215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.414 [2024-07-26 01:16:23.561459] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.414 [2024-07-26 01:16:23.561484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.414 [2024-07-26 01:16:23.561499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.414 [2024-07-26 01:16:23.565081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.414 [2024-07-26 01:16:23.574359] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.415 [2024-07-26 01:16:23.574774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.415 [2024-07-26 01:16:23.574806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.415 [2024-07-26 01:16:23.574830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.415 [2024-07-26 01:16:23.575082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.415 [2024-07-26 01:16:23.575325] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.415 [2024-07-26 01:16:23.575351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.415 [2024-07-26 01:16:23.575368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.415 [2024-07-26 01:16:23.578940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.415 [2024-07-26 01:16:23.588227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.415 [2024-07-26 01:16:23.588648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.415 [2024-07-26 01:16:23.588679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.415 [2024-07-26 01:16:23.588697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.415 [2024-07-26 01:16:23.588935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.415 [2024-07-26 01:16:23.589191] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.415 [2024-07-26 01:16:23.589217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.415 [2024-07-26 01:16:23.589234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.415 [2024-07-26 01:16:23.592809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.415 [2024-07-26 01:16:23.602097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.415 [2024-07-26 01:16:23.602522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.415 [2024-07-26 01:16:23.602554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.415 [2024-07-26 01:16:23.602571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.415 [2024-07-26 01:16:23.602810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.415 [2024-07-26 01:16:23.603052] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.415 [2024-07-26 01:16:23.603090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.415 [2024-07-26 01:16:23.603107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.415 [2024-07-26 01:16:23.606677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.415 [2024-07-26 01:16:23.615958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.415 [2024-07-26 01:16:23.616392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.415 [2024-07-26 01:16:23.616424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.415 [2024-07-26 01:16:23.616443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.415 [2024-07-26 01:16:23.616681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.415 [2024-07-26 01:16:23.616924] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.415 [2024-07-26 01:16:23.616955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.415 [2024-07-26 01:16:23.616971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.415 [2024-07-26 01:16:23.620558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.415 [2024-07-26 01:16:23.629876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.415 [2024-07-26 01:16:23.630281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.415 [2024-07-26 01:16:23.630316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.415 [2024-07-26 01:16:23.630335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.415 [2024-07-26 01:16:23.630574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.415 [2024-07-26 01:16:23.630819] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.415 [2024-07-26 01:16:23.630844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.415 [2024-07-26 01:16:23.630860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.415 [2024-07-26 01:16:23.634442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.415 [2024-07-26 01:16:23.643714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.415 [2024-07-26 01:16:23.644132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.415 [2024-07-26 01:16:23.644164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.415 [2024-07-26 01:16:23.644182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.415 [2024-07-26 01:16:23.644421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.415 [2024-07-26 01:16:23.644664] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.415 [2024-07-26 01:16:23.644690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.415 [2024-07-26 01:16:23.644706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.415 [2024-07-26 01:16:23.648297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.415 [2024-07-26 01:16:23.657593] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.415 [2024-07-26 01:16:23.657990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.415 [2024-07-26 01:16:23.658023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.415 [2024-07-26 01:16:23.658041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.415 [2024-07-26 01:16:23.658291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.415 [2024-07-26 01:16:23.658534] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.415 [2024-07-26 01:16:23.658559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.415 [2024-07-26 01:16:23.658576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.415 [2024-07-26 01:16:23.662159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.415 [2024-07-26 01:16:23.671466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.415 [2024-07-26 01:16:23.671990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.415 [2024-07-26 01:16:23.672042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.415 [2024-07-26 01:16:23.672069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.415 [2024-07-26 01:16:23.672311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.415 [2024-07-26 01:16:23.672553] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.415 [2024-07-26 01:16:23.672579] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.415 [2024-07-26 01:16:23.672595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.415 [2024-07-26 01:16:23.676190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.415 [2024-07-26 01:16:23.685473] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.415 [2024-07-26 01:16:23.685886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.415 [2024-07-26 01:16:23.685919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.415 [2024-07-26 01:16:23.685937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.415 [2024-07-26 01:16:23.686188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.415 [2024-07-26 01:16:23.686440] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.415 [2024-07-26 01:16:23.686465] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.415 [2024-07-26 01:16:23.686482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.415 [2024-07-26 01:16:23.690055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.415 [2024-07-26 01:16:23.699340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.415 [2024-07-26 01:16:23.699751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.415 [2024-07-26 01:16:23.699783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.415 [2024-07-26 01:16:23.699801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.415 [2024-07-26 01:16:23.700039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.415 [2024-07-26 01:16:23.700295] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.415 [2024-07-26 01:16:23.700322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.415 [2024-07-26 01:16:23.700338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.415 [2024-07-26 01:16:23.703909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.415 [2024-07-26 01:16:23.713200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.415 [2024-07-26 01:16:23.713588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.415 [2024-07-26 01:16:23.713620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.415 [2024-07-26 01:16:23.713639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.415 [2024-07-26 01:16:23.713883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.415 [2024-07-26 01:16:23.714140] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.415 [2024-07-26 01:16:23.714165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.415 [2024-07-26 01:16:23.714181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.415 [2024-07-26 01:16:23.717754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.415 [2024-07-26 01:16:23.727035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.415 [2024-07-26 01:16:23.727454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.415 [2024-07-26 01:16:23.727487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.415 [2024-07-26 01:16:23.727506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.415 [2024-07-26 01:16:23.727745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.415 [2024-07-26 01:16:23.727999] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.415 [2024-07-26 01:16:23.728026] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.415 [2024-07-26 01:16:23.728042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.415 [2024-07-26 01:16:23.731628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.415 [2024-07-26 01:16:23.740903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.415 [2024-07-26 01:16:23.741311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.415 [2024-07-26 01:16:23.741343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.415 [2024-07-26 01:16:23.741361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.415 [2024-07-26 01:16:23.741599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.415 [2024-07-26 01:16:23.741841] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.415 [2024-07-26 01:16:23.741866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.415 [2024-07-26 01:16:23.741882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.415 [2024-07-26 01:16:23.745466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.415 [2024-07-26 01:16:23.754754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.415 [2024-07-26 01:16:23.755171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.415 [2024-07-26 01:16:23.755204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.415 [2024-07-26 01:16:23.755223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.415 [2024-07-26 01:16:23.755463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.415 [2024-07-26 01:16:23.755708] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.415 [2024-07-26 01:16:23.755734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.415 [2024-07-26 01:16:23.755755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.415 [2024-07-26 01:16:23.759341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.415 [2024-07-26 01:16:23.768618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.415 [2024-07-26 01:16:23.769025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.415 [2024-07-26 01:16:23.769057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.415 [2024-07-26 01:16:23.769088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.415 [2024-07-26 01:16:23.769328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.415 [2024-07-26 01:16:23.769571] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.415 [2024-07-26 01:16:23.769596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.415 [2024-07-26 01:16:23.769613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.415 [2024-07-26 01:16:23.773199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.415 [2024-07-26 01:16:23.782476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.415 [2024-07-26 01:16:23.782887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.415 [2024-07-26 01:16:23.782919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.415 [2024-07-26 01:16:23.782937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.415 [2024-07-26 01:16:23.783190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.415 [2024-07-26 01:16:23.783433] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.415 [2024-07-26 01:16:23.783458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.415 [2024-07-26 01:16:23.783474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.415 [2024-07-26 01:16:23.787047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.415 [2024-07-26 01:16:23.796334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.415 [2024-07-26 01:16:23.796756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.415 [2024-07-26 01:16:23.796788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.415 [2024-07-26 01:16:23.796807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.415 [2024-07-26 01:16:23.797046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.415 [2024-07-26 01:16:23.797303] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.415 [2024-07-26 01:16:23.797329] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.415 [2024-07-26 01:16:23.797345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.415 [2024-07-26 01:16:23.800917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.415 [2024-07-26 01:16:23.810204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.415 [2024-07-26 01:16:23.810597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.415 [2024-07-26 01:16:23.810629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.415 [2024-07-26 01:16:23.810647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.415 [2024-07-26 01:16:23.810886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.415 [2024-07-26 01:16:23.811142] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.415 [2024-07-26 01:16:23.811169] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.415 [2024-07-26 01:16:23.811186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.415 [2024-07-26 01:16:23.814758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.415 [2024-07-26 01:16:23.824052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.415 [2024-07-26 01:16:23.824472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.415 [2024-07-26 01:16:23.824504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.415 [2024-07-26 01:16:23.824522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.415 [2024-07-26 01:16:23.824761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.415 [2024-07-26 01:16:23.825004] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.415 [2024-07-26 01:16:23.825029] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.415 [2024-07-26 01:16:23.825045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.415 [2024-07-26 01:16:23.828645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.674 [2024-07-26 01:16:23.837933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.674 [2024-07-26 01:16:23.838362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.674 [2024-07-26 01:16:23.838394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.674 [2024-07-26 01:16:23.838412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.674 [2024-07-26 01:16:23.838650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.674 [2024-07-26 01:16:23.838892] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.674 [2024-07-26 01:16:23.838918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.674 [2024-07-26 01:16:23.838934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.674 [2024-07-26 01:16:23.842523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.674 [2024-07-26 01:16:23.851813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.674 [2024-07-26 01:16:23.852225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.674 [2024-07-26 01:16:23.852256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.674 [2024-07-26 01:16:23.852274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.674 [2024-07-26 01:16:23.852518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.674 [2024-07-26 01:16:23.852761] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.674 [2024-07-26 01:16:23.852787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.674 [2024-07-26 01:16:23.852803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.674 [2024-07-26 01:16:23.856384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.674 [2024-07-26 01:16:23.865672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.674 [2024-07-26 01:16:23.866084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.674 [2024-07-26 01:16:23.866117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.674 [2024-07-26 01:16:23.866135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.674 [2024-07-26 01:16:23.866374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.674 [2024-07-26 01:16:23.866616] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.674 [2024-07-26 01:16:23.866642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.674 [2024-07-26 01:16:23.866657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.674 [2024-07-26 01:16:23.870242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.674 [2024-07-26 01:16:23.879541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.674 [2024-07-26 01:16:23.879970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.674 [2024-07-26 01:16:23.880004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.674 [2024-07-26 01:16:23.880022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.674 [2024-07-26 01:16:23.880274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.674 [2024-07-26 01:16:23.880519] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.674 [2024-07-26 01:16:23.880545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.674 [2024-07-26 01:16:23.880562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.674 [2024-07-26 01:16:23.884144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.674 [2024-07-26 01:16:23.893426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.674 [2024-07-26 01:16:23.893812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.674 [2024-07-26 01:16:23.893844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.674 [2024-07-26 01:16:23.893862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.674 [2024-07-26 01:16:23.894114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.674 [2024-07-26 01:16:23.894357] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.674 [2024-07-26 01:16:23.894383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.674 [2024-07-26 01:16:23.894399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.674 [2024-07-26 01:16:23.897982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.674 [2024-07-26 01:16:23.907280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.674 [2024-07-26 01:16:23.907668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.674 [2024-07-26 01:16:23.907700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.674 [2024-07-26 01:16:23.907719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.674 [2024-07-26 01:16:23.907958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.674 [2024-07-26 01:16:23.908214] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.674 [2024-07-26 01:16:23.908241] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.674 [2024-07-26 01:16:23.908257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.674 [2024-07-26 01:16:23.911831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.674 [2024-07-26 01:16:23.921123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.674 [2024-07-26 01:16:23.921532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.674 [2024-07-26 01:16:23.921564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.674 [2024-07-26 01:16:23.921582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.674 [2024-07-26 01:16:23.921820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.674 [2024-07-26 01:16:23.922077] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.674 [2024-07-26 01:16:23.922104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.674 [2024-07-26 01:16:23.922121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.674 [2024-07-26 01:16:23.925694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.674 [2024-07-26 01:16:23.934983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.674 [2024-07-26 01:16:23.935406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.674 [2024-07-26 01:16:23.935438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.674 [2024-07-26 01:16:23.935456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.674 [2024-07-26 01:16:23.935694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.674 [2024-07-26 01:16:23.935938] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.674 [2024-07-26 01:16:23.935964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.674 [2024-07-26 01:16:23.935980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.674 [2024-07-26 01:16:23.939566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.674 [2024-07-26 01:16:23.948863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.674 [2024-07-26 01:16:23.949260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.674 [2024-07-26 01:16:23.949292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.674 [2024-07-26 01:16:23.949316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.674 [2024-07-26 01:16:23.949554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.674 [2024-07-26 01:16:23.949797] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.674 [2024-07-26 01:16:23.949823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.674 [2024-07-26 01:16:23.949840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.674 [2024-07-26 01:16:23.953426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.674 [2024-07-26 01:16:23.962701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.674 [2024-07-26 01:16:23.963113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.674 [2024-07-26 01:16:23.963146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.674 [2024-07-26 01:16:23.963165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.674 [2024-07-26 01:16:23.963404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.674 [2024-07-26 01:16:23.963648] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.674 [2024-07-26 01:16:23.963674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.674 [2024-07-26 01:16:23.963690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.674 [2024-07-26 01:16:23.967274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.674 [2024-07-26 01:16:23.976558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.674 [2024-07-26 01:16:23.976983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.674 [2024-07-26 01:16:23.977015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.674 [2024-07-26 01:16:23.977033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.674 [2024-07-26 01:16:23.977283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.674 [2024-07-26 01:16:23.977526] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.674 [2024-07-26 01:16:23.977551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.674 [2024-07-26 01:16:23.977567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.674 [2024-07-26 01:16:23.981145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.674 [2024-07-26 01:16:23.990421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.674 [2024-07-26 01:16:23.990827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.674 [2024-07-26 01:16:23.990858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.674 [2024-07-26 01:16:23.990877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.674 [2024-07-26 01:16:23.991128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.674 [2024-07-26 01:16:23.991372] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.674 [2024-07-26 01:16:23.991403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.674 [2024-07-26 01:16:23.991420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.674 [2024-07-26 01:16:23.994997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.674 [2024-07-26 01:16:24.004286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.674 [2024-07-26 01:16:24.004674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.674 [2024-07-26 01:16:24.004706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:53.674 [2024-07-26 01:16:24.004725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:53.675 [2024-07-26 01:16:24.004962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:53.675 [2024-07-26 01:16:24.005219] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.675 [2024-07-26 01:16:24.005246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.675 [2024-07-26 01:16:24.005263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.675 [2024-07-26 01:16:24.008836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.675 [2024-07-26 01:16:24.018125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.675 [2024-07-26 01:16:24.018542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.675 [2024-07-26 01:16:24.018574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.675 [2024-07-26 01:16:24.018592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.675 [2024-07-26 01:16:24.018830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.675 [2024-07-26 01:16:24.019085] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.675 [2024-07-26 01:16:24.019111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.675 [2024-07-26 01:16:24.019128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.675 [2024-07-26 01:16:24.022699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.675 [2024-07-26 01:16:24.031988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.675 [2024-07-26 01:16:24.032407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.675 [2024-07-26 01:16:24.032440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.675 [2024-07-26 01:16:24.032458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.675 [2024-07-26 01:16:24.032696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.675 [2024-07-26 01:16:24.032939] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.675 [2024-07-26 01:16:24.032964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.675 [2024-07-26 01:16:24.032980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.675 [2024-07-26 01:16:24.036565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.675 [2024-07-26 01:16:24.045849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.675 [2024-07-26 01:16:24.046246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.675 [2024-07-26 01:16:24.046279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.675 [2024-07-26 01:16:24.046297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.675 [2024-07-26 01:16:24.046536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.675 [2024-07-26 01:16:24.046780] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.675 [2024-07-26 01:16:24.046806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.675 [2024-07-26 01:16:24.046822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.675 [2024-07-26 01:16:24.050405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.675 [2024-07-26 01:16:24.059891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.675 [2024-07-26 01:16:24.060331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.675 [2024-07-26 01:16:24.060364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.675 [2024-07-26 01:16:24.060382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.675 [2024-07-26 01:16:24.060621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.675 [2024-07-26 01:16:24.060865] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.675 [2024-07-26 01:16:24.060891] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.675 [2024-07-26 01:16:24.060907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.675 [2024-07-26 01:16:24.064492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.675 [2024-07-26 01:16:24.073774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.675 [2024-07-26 01:16:24.074194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.675 [2024-07-26 01:16:24.074226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.675 [2024-07-26 01:16:24.074244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.675 [2024-07-26 01:16:24.074482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.675 [2024-07-26 01:16:24.074724] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.675 [2024-07-26 01:16:24.074750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.675 [2024-07-26 01:16:24.074765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.675 [2024-07-26 01:16:24.078353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.675 [2024-07-26 01:16:24.087641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.675 [2024-07-26 01:16:24.088036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.675 [2024-07-26 01:16:24.088077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.675 [2024-07-26 01:16:24.088104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.675 [2024-07-26 01:16:24.088344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.675 [2024-07-26 01:16:24.088588] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.675 [2024-07-26 01:16:24.088613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.675 [2024-07-26 01:16:24.088630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.675 [2024-07-26 01:16:24.092211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.933 [2024-07-26 01:16:24.101491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.933 [2024-07-26 01:16:24.101901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.933 [2024-07-26 01:16:24.101932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.933 [2024-07-26 01:16:24.101950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.933 [2024-07-26 01:16:24.102201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.933 [2024-07-26 01:16:24.102444] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.933 [2024-07-26 01:16:24.102469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.933 [2024-07-26 01:16:24.102486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.933 [2024-07-26 01:16:24.106065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.933 [2024-07-26 01:16:24.115340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.933 [2024-07-26 01:16:24.115748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.933 [2024-07-26 01:16:24.115780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.933 [2024-07-26 01:16:24.115798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.933 [2024-07-26 01:16:24.116036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.933 [2024-07-26 01:16:24.116291] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.933 [2024-07-26 01:16:24.116317] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.933 [2024-07-26 01:16:24.116334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.933 [2024-07-26 01:16:24.119909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.933 [2024-07-26 01:16:24.129201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.933 [2024-07-26 01:16:24.129629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.933 [2024-07-26 01:16:24.129661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.933 [2024-07-26 01:16:24.129679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.933 [2024-07-26 01:16:24.129917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.933 [2024-07-26 01:16:24.130175] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.933 [2024-07-26 01:16:24.130202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.933 [2024-07-26 01:16:24.130223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.933 [2024-07-26 01:16:24.133797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.933 [2024-07-26 01:16:24.143084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.933 [2024-07-26 01:16:24.143481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.933 [2024-07-26 01:16:24.143512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.933 [2024-07-26 01:16:24.143532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.933 [2024-07-26 01:16:24.143771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.933 [2024-07-26 01:16:24.144015] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.933 [2024-07-26 01:16:24.144039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.933 [2024-07-26 01:16:24.144055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.933 [2024-07-26 01:16:24.147639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.933 [2024-07-26 01:16:24.157138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.933 [2024-07-26 01:16:24.157557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.933 [2024-07-26 01:16:24.157588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.933 [2024-07-26 01:16:24.157606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.933 [2024-07-26 01:16:24.157845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.933 [2024-07-26 01:16:24.158174] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.933 [2024-07-26 01:16:24.158199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.933 [2024-07-26 01:16:24.158215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.933 [2024-07-26 01:16:24.161786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.933 [2024-07-26 01:16:24.171087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.933 [2024-07-26 01:16:24.171511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.933 [2024-07-26 01:16:24.171544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.933 [2024-07-26 01:16:24.171563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.933 [2024-07-26 01:16:24.171801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.933 [2024-07-26 01:16:24.172044] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.933 [2024-07-26 01:16:24.172079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.933 [2024-07-26 01:16:24.172107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.933 [2024-07-26 01:16:24.175683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.933 [2024-07-26 01:16:24.184968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.933 [2024-07-26 01:16:24.185348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.933 [2024-07-26 01:16:24.185380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.933 [2024-07-26 01:16:24.185398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.933 [2024-07-26 01:16:24.185646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.933 [2024-07-26 01:16:24.185898] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.933 [2024-07-26 01:16:24.185923] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.933 [2024-07-26 01:16:24.185938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.933 [2024-07-26 01:16:24.189521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.934 [2024-07-26 01:16:24.199019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.934 [2024-07-26 01:16:24.199426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.934 [2024-07-26 01:16:24.199458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.934 [2024-07-26 01:16:24.199475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.934 [2024-07-26 01:16:24.199714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.934 [2024-07-26 01:16:24.199957] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.934 [2024-07-26 01:16:24.199982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.934 [2024-07-26 01:16:24.199998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.934 [2024-07-26 01:16:24.203580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.934 [2024-07-26 01:16:24.212873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.934 [2024-07-26 01:16:24.213269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.934 [2024-07-26 01:16:24.213301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.934 [2024-07-26 01:16:24.213319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.934 [2024-07-26 01:16:24.213562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.934 [2024-07-26 01:16:24.213805] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.934 [2024-07-26 01:16:24.213830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.934 [2024-07-26 01:16:24.213846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.934 [2024-07-26 01:16:24.217432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.934 [2024-07-26 01:16:24.226716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.934 [2024-07-26 01:16:24.227119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.934 [2024-07-26 01:16:24.227151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.934 [2024-07-26 01:16:24.227169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.934 [2024-07-26 01:16:24.227413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.934 [2024-07-26 01:16:24.227656] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.934 [2024-07-26 01:16:24.227681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.934 [2024-07-26 01:16:24.227697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.934 [2024-07-26 01:16:24.231291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.934 [2024-07-26 01:16:24.240570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.934 [2024-07-26 01:16:24.240993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.934 [2024-07-26 01:16:24.241025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.934 [2024-07-26 01:16:24.241043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.934 [2024-07-26 01:16:24.241289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.934 [2024-07-26 01:16:24.241543] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.934 [2024-07-26 01:16:24.241568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.934 [2024-07-26 01:16:24.241584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.934 [2024-07-26 01:16:24.245166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.934 [2024-07-26 01:16:24.254457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.934 [2024-07-26 01:16:24.254881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.934 [2024-07-26 01:16:24.254913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.934 [2024-07-26 01:16:24.254931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.934 [2024-07-26 01:16:24.255179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.934 [2024-07-26 01:16:24.255422] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.934 [2024-07-26 01:16:24.255447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.934 [2024-07-26 01:16:24.255463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.934 [2024-07-26 01:16:24.259041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.934 [2024-07-26 01:16:24.268419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.934 [2024-07-26 01:16:24.268833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.934 [2024-07-26 01:16:24.268865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.934 [2024-07-26 01:16:24.268883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.934 [2024-07-26 01:16:24.269134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.934 [2024-07-26 01:16:24.269378] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.934 [2024-07-26 01:16:24.269404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.934 [2024-07-26 01:16:24.269420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.934 [2024-07-26 01:16:24.273000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.934 [2024-07-26 01:16:24.282307] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.934 [2024-07-26 01:16:24.282725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.934 [2024-07-26 01:16:24.282757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.934 [2024-07-26 01:16:24.282775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.934 [2024-07-26 01:16:24.283013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.934 [2024-07-26 01:16:24.283270] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.934 [2024-07-26 01:16:24.283306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.934 [2024-07-26 01:16:24.283323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.934 [2024-07-26 01:16:24.286900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.934 [2024-07-26 01:16:24.296197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.934 [2024-07-26 01:16:24.296584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.934 [2024-07-26 01:16:24.296616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.934 [2024-07-26 01:16:24.296634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.934 [2024-07-26 01:16:24.296873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.934 [2024-07-26 01:16:24.297129] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.934 [2024-07-26 01:16:24.297154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.934 [2024-07-26 01:16:24.297171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.934 [2024-07-26 01:16:24.300749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.934 [2024-07-26 01:16:24.310047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.934 [2024-07-26 01:16:24.310447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.934 [2024-07-26 01:16:24.310479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.934 [2024-07-26 01:16:24.310497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.934 [2024-07-26 01:16:24.310735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.934 [2024-07-26 01:16:24.310978] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.934 [2024-07-26 01:16:24.311003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.934 [2024-07-26 01:16:24.311020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.934 [2024-07-26 01:16:24.314603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.934 [2024-07-26 01:16:24.323890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.934 [2024-07-26 01:16:24.324330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.934 [2024-07-26 01:16:24.324367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.934 [2024-07-26 01:16:24.324387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.934 [2024-07-26 01:16:24.324625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.934 [2024-07-26 01:16:24.324867] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.934 [2024-07-26 01:16:24.324893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.934 [2024-07-26 01:16:24.324909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.934 [2024-07-26 01:16:24.328495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.934 [2024-07-26 01:16:24.337800] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.935 [2024-07-26 01:16:24.338246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.935 [2024-07-26 01:16:24.338278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.935 [2024-07-26 01:16:24.338297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.935 [2024-07-26 01:16:24.338536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.935 [2024-07-26 01:16:24.338778] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.935 [2024-07-26 01:16:24.338803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.935 [2024-07-26 01:16:24.338819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.935 [2024-07-26 01:16:24.342404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.935 [2024-07-26 01:16:24.351680] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:53.935 [2024-07-26 01:16:24.352093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.935 [2024-07-26 01:16:24.352125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:53.935 [2024-07-26 01:16:24.352143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:53.935 [2024-07-26 01:16:24.352381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:53.935 [2024-07-26 01:16:24.352624] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:53.935 [2024-07-26 01:16:24.352648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:53.935 [2024-07-26 01:16:24.352664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.935 [2024-07-26 01:16:24.356246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:54.194 [2024-07-26 01:16:24.365515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:54.194 [2024-07-26 01:16:24.365936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.194 [2024-07-26 01:16:24.365968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:54.194 [2024-07-26 01:16:24.365986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:54.194 [2024-07-26 01:16:24.366235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:54.194 [2024-07-26 01:16:24.366484] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:54.194 [2024-07-26 01:16:24.366510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:54.194 [2024-07-26 01:16:24.366527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:54.194 [2024-07-26 01:16:24.370103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:54.194 [2024-07-26 01:16:24.379379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:54.194 [2024-07-26 01:16:24.379764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.194 [2024-07-26 01:16:24.379795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:54.194 [2024-07-26 01:16:24.379813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:54.194 [2024-07-26 01:16:24.380050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:54.194 [2024-07-26 01:16:24.380305] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:54.194 [2024-07-26 01:16:24.380331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:54.194 [2024-07-26 01:16:24.380347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:54.194 [2024-07-26 01:16:24.383915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:54.194 [2024-07-26 01:16:24.393409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:54.194 [2024-07-26 01:16:24.393822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.194 [2024-07-26 01:16:24.393853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:54.194 [2024-07-26 01:16:24.393871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:54.194 [2024-07-26 01:16:24.394120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:54.194 [2024-07-26 01:16:24.394363] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:54.194 [2024-07-26 01:16:24.394388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:54.194 [2024-07-26 01:16:24.394405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:54.194 [2024-07-26 01:16:24.397972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:54.194 [2024-07-26 01:16:24.407253] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:54.194 [2024-07-26 01:16:24.407678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.194 [2024-07-26 01:16:24.407710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:54.194 [2024-07-26 01:16:24.407729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:54.194 [2024-07-26 01:16:24.407967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:54.194 [2024-07-26 01:16:24.408228] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:54.194 [2024-07-26 01:16:24.408254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:54.194 [2024-07-26 01:16:24.408271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:54.194 [2024-07-26 01:16:24.411839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:54.194 [2024-07-26 01:16:24.421124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.194 [2024-07-26 01:16:24.421505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.194 [2024-07-26 01:16:24.421537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.194 [2024-07-26 01:16:24.421556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.194 [2024-07-26 01:16:24.421795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.194 [2024-07-26 01:16:24.422039] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.194 [2024-07-26 01:16:24.422073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.194 [2024-07-26 01:16:24.422091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.194 [2024-07-26 01:16:24.425664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.194 [2024-07-26 01:16:24.434953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.194 [2024-07-26 01:16:24.435375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.194 [2024-07-26 01:16:24.435407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.194 [2024-07-26 01:16:24.435425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.194 [2024-07-26 01:16:24.435663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.194 [2024-07-26 01:16:24.435906] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.194 [2024-07-26 01:16:24.435931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.194 [2024-07-26 01:16:24.435947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.194 [2024-07-26 01:16:24.439533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.194 [2024-07-26 01:16:24.448807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.194 [2024-07-26 01:16:24.449201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.194 [2024-07-26 01:16:24.449232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.194 [2024-07-26 01:16:24.449250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.194 [2024-07-26 01:16:24.449489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.194 [2024-07-26 01:16:24.449732] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.194 [2024-07-26 01:16:24.449756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.194 [2024-07-26 01:16:24.449773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.194 [2024-07-26 01:16:24.453351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.194 [2024-07-26 01:16:24.462836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.194 [2024-07-26 01:16:24.463231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.194 [2024-07-26 01:16:24.463263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.194 [2024-07-26 01:16:24.463286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.194 [2024-07-26 01:16:24.463525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.194 [2024-07-26 01:16:24.463768] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.194 [2024-07-26 01:16:24.463792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.194 [2024-07-26 01:16:24.463808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.194 [2024-07-26 01:16:24.467386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.194 [2024-07-26 01:16:24.476870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.194 [2024-07-26 01:16:24.477301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.194 [2024-07-26 01:16:24.477333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.194 [2024-07-26 01:16:24.477351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.195 [2024-07-26 01:16:24.477590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.195 [2024-07-26 01:16:24.477833] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.195 [2024-07-26 01:16:24.477857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.195 [2024-07-26 01:16:24.477873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.195 [2024-07-26 01:16:24.481453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.195 [2024-07-26 01:16:24.490737] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.195 [2024-07-26 01:16:24.491171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.195 [2024-07-26 01:16:24.491203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.195 [2024-07-26 01:16:24.491222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.195 [2024-07-26 01:16:24.491461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.195 [2024-07-26 01:16:24.491703] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.195 [2024-07-26 01:16:24.491729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.195 [2024-07-26 01:16:24.491745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.195 [2024-07-26 01:16:24.495325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.195 [2024-07-26 01:16:24.504625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.195 [2024-07-26 01:16:24.505041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.195 [2024-07-26 01:16:24.505080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.195 [2024-07-26 01:16:24.505103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.195 [2024-07-26 01:16:24.505342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.195 [2024-07-26 01:16:24.505586] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.195 [2024-07-26 01:16:24.505612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.195 [2024-07-26 01:16:24.505634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.195 [2024-07-26 01:16:24.509214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.195 [2024-07-26 01:16:24.518496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.195 [2024-07-26 01:16:24.518925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.195 [2024-07-26 01:16:24.518957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.195 [2024-07-26 01:16:24.518975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.195 [2024-07-26 01:16:24.519224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.195 [2024-07-26 01:16:24.519467] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.195 [2024-07-26 01:16:24.519492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.195 [2024-07-26 01:16:24.519508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.195 [2024-07-26 01:16:24.523086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.195 [2024-07-26 01:16:24.532396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.195 [2024-07-26 01:16:24.532810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.195 [2024-07-26 01:16:24.532842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.195 [2024-07-26 01:16:24.532860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.195 [2024-07-26 01:16:24.533109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.195 [2024-07-26 01:16:24.533353] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.195 [2024-07-26 01:16:24.533377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.195 [2024-07-26 01:16:24.533393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.195 [2024-07-26 01:16:24.536964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.195 [2024-07-26 01:16:24.546244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.195 [2024-07-26 01:16:24.546637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.195 [2024-07-26 01:16:24.546669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.195 [2024-07-26 01:16:24.546687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.195 [2024-07-26 01:16:24.546925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.195 [2024-07-26 01:16:24.547178] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.195 [2024-07-26 01:16:24.547202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.195 [2024-07-26 01:16:24.547218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.195 [2024-07-26 01:16:24.550789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.195 [2024-07-26 01:16:24.560284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.195 [2024-07-26 01:16:24.560678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.195 [2024-07-26 01:16:24.560710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.195 [2024-07-26 01:16:24.560728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.195 [2024-07-26 01:16:24.560966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.195 [2024-07-26 01:16:24.561219] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.195 [2024-07-26 01:16:24.561245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.195 [2024-07-26 01:16:24.561262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.195 [2024-07-26 01:16:24.564828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.195 [2024-07-26 01:16:24.574344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.195 [2024-07-26 01:16:24.574732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.195 [2024-07-26 01:16:24.574764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.195 [2024-07-26 01:16:24.574782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.195 [2024-07-26 01:16:24.575020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.195 [2024-07-26 01:16:24.575272] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.195 [2024-07-26 01:16:24.575297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.195 [2024-07-26 01:16:24.575313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.195 [2024-07-26 01:16:24.578884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.195 [2024-07-26 01:16:24.588382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.195 [2024-07-26 01:16:24.588803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.195 [2024-07-26 01:16:24.588835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.195 [2024-07-26 01:16:24.588853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.195 [2024-07-26 01:16:24.589107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.195 [2024-07-26 01:16:24.589350] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.195 [2024-07-26 01:16:24.589375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.195 [2024-07-26 01:16:24.589391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.195 [2024-07-26 01:16:24.592961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.195 [2024-07-26 01:16:24.602242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.195 [2024-07-26 01:16:24.602643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.195 [2024-07-26 01:16:24.602676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.195 [2024-07-26 01:16:24.602695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.195 [2024-07-26 01:16:24.602939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.195 [2024-07-26 01:16:24.603194] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.195 [2024-07-26 01:16:24.603219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.195 [2024-07-26 01:16:24.603235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.195 [2024-07-26 01:16:24.606805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.196 [2024-07-26 01:16:24.616299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.196 [2024-07-26 01:16:24.616723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.196 [2024-07-26 01:16:24.616756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.196 [2024-07-26 01:16:24.616774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.196 [2024-07-26 01:16:24.617013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.196 [2024-07-26 01:16:24.617266] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.196 [2024-07-26 01:16:24.617292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.196 [2024-07-26 01:16:24.617308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.455 [2024-07-26 01:16:24.620886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.455 [2024-07-26 01:16:24.630164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.455 [2024-07-26 01:16:24.630558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.455 [2024-07-26 01:16:24.630590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.455 [2024-07-26 01:16:24.630608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.455 [2024-07-26 01:16:24.630846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.455 [2024-07-26 01:16:24.631098] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.455 [2024-07-26 01:16:24.631124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.455 [2024-07-26 01:16:24.631140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.455 [2024-07-26 01:16:24.634719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.455 [2024-07-26 01:16:24.643989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.455 [2024-07-26 01:16:24.644408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.455 [2024-07-26 01:16:24.644440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.455 [2024-07-26 01:16:24.644458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.455 [2024-07-26 01:16:24.644696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.455 [2024-07-26 01:16:24.644939] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.455 [2024-07-26 01:16:24.644964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.455 [2024-07-26 01:16:24.644985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.455 [2024-07-26 01:16:24.648565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.455 [2024-07-26 01:16:24.657868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.455 [2024-07-26 01:16:24.658244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.455 [2024-07-26 01:16:24.658278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.455 [2024-07-26 01:16:24.658297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.455 [2024-07-26 01:16:24.658537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.455 [2024-07-26 01:16:24.658783] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.455 [2024-07-26 01:16:24.658808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.455 [2024-07-26 01:16:24.658825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.455 [2024-07-26 01:16:24.662402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.455 [2024-07-26 01:16:24.671882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.455 [2024-07-26 01:16:24.672280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.455 [2024-07-26 01:16:24.672312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.455 [2024-07-26 01:16:24.672330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.455 [2024-07-26 01:16:24.672569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.455 [2024-07-26 01:16:24.672811] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.455 [2024-07-26 01:16:24.672837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.455 [2024-07-26 01:16:24.672853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.455 [2024-07-26 01:16:24.676437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.455 [2024-07-26 01:16:24.685922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.455 [2024-07-26 01:16:24.686343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.455 [2024-07-26 01:16:24.686376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.455 [2024-07-26 01:16:24.686394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.455 [2024-07-26 01:16:24.686633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.455 [2024-07-26 01:16:24.686878] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.455 [2024-07-26 01:16:24.686903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.455 [2024-07-26 01:16:24.686919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.455 [2024-07-26 01:16:24.690501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.456 [2024-07-26 01:16:24.699770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.456 [2024-07-26 01:16:24.700205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.456 [2024-07-26 01:16:24.700241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.456 [2024-07-26 01:16:24.700260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.456 [2024-07-26 01:16:24.700498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.456 [2024-07-26 01:16:24.700742] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.456 [2024-07-26 01:16:24.700767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.456 [2024-07-26 01:16:24.700784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.456 [2024-07-26 01:16:24.704365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.456 [2024-07-26 01:16:24.713641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.456 [2024-07-26 01:16:24.714038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.456 [2024-07-26 01:16:24.714077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.456 [2024-07-26 01:16:24.714097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.456 [2024-07-26 01:16:24.714336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.456 [2024-07-26 01:16:24.714578] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.456 [2024-07-26 01:16:24.714604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.456 [2024-07-26 01:16:24.714619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.456 [2024-07-26 01:16:24.718199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.456 [2024-07-26 01:16:24.727675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.456 [2024-07-26 01:16:24.728091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.456 [2024-07-26 01:16:24.728123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.456 [2024-07-26 01:16:24.728141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.456 [2024-07-26 01:16:24.728379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.456 [2024-07-26 01:16:24.728623] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.456 [2024-07-26 01:16:24.728648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.456 [2024-07-26 01:16:24.728663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.456 [2024-07-26 01:16:24.732255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.456 [2024-07-26 01:16:24.741522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.456 [2024-07-26 01:16:24.741914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.456 [2024-07-26 01:16:24.741946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.456 [2024-07-26 01:16:24.741965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.456 [2024-07-26 01:16:24.742215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.456 [2024-07-26 01:16:24.742469] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.456 [2024-07-26 01:16:24.742495] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.456 [2024-07-26 01:16:24.742512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.456 [2024-07-26 01:16:24.746090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.456 [2024-07-26 01:16:24.755359] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.456 [2024-07-26 01:16:24.755759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.456 [2024-07-26 01:16:24.755792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.456 [2024-07-26 01:16:24.755810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.456 [2024-07-26 01:16:24.756048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.456 [2024-07-26 01:16:24.756312] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.456 [2024-07-26 01:16:24.756339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.456 [2024-07-26 01:16:24.756356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.456 [2024-07-26 01:16:24.759929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.456 [2024-07-26 01:16:24.769207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.456 [2024-07-26 01:16:24.769599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.456 [2024-07-26 01:16:24.769631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.456 [2024-07-26 01:16:24.769650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.456 [2024-07-26 01:16:24.769888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.456 [2024-07-26 01:16:24.770142] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.456 [2024-07-26 01:16:24.770168] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.456 [2024-07-26 01:16:24.770185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.456 [2024-07-26 01:16:24.773757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.456 [2024-07-26 01:16:24.783248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.456 [2024-07-26 01:16:24.783638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.456 [2024-07-26 01:16:24.783669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.456 [2024-07-26 01:16:24.783688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.456 [2024-07-26 01:16:24.783925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.456 [2024-07-26 01:16:24.784179] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.456 [2024-07-26 01:16:24.784205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.456 [2024-07-26 01:16:24.784221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.456 [2024-07-26 01:16:24.787792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.456 [2024-07-26 01:16:24.797286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.456 [2024-07-26 01:16:24.797673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.456 [2024-07-26 01:16:24.797706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.456 [2024-07-26 01:16:24.797725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.456 [2024-07-26 01:16:24.797964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.456 [2024-07-26 01:16:24.798219] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.456 [2024-07-26 01:16:24.798245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.456 [2024-07-26 01:16:24.798261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.456 [2024-07-26 01:16:24.801829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.456 [2024-07-26 01:16:24.811311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.456 [2024-07-26 01:16:24.811740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.456 [2024-07-26 01:16:24.811771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.456 [2024-07-26 01:16:24.811789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.456 [2024-07-26 01:16:24.812028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.456 [2024-07-26 01:16:24.812281] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.456 [2024-07-26 01:16:24.812307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.456 [2024-07-26 01:16:24.812324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.456 [2024-07-26 01:16:24.815895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.456 [2024-07-26 01:16:24.825172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.456 [2024-07-26 01:16:24.825555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.456 [2024-07-26 01:16:24.825586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.456 [2024-07-26 01:16:24.825604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.456 [2024-07-26 01:16:24.825842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.456 [2024-07-26 01:16:24.826096] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.456 [2024-07-26 01:16:24.826122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.456 [2024-07-26 01:16:24.826138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.456 [2024-07-26 01:16:24.829712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.457 [2024-07-26 01:16:24.839016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.457 [2024-07-26 01:16:24.839472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.457 [2024-07-26 01:16:24.839503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.457 [2024-07-26 01:16:24.839527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.457 [2024-07-26 01:16:24.839767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.457 [2024-07-26 01:16:24.840010] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.457 [2024-07-26 01:16:24.840035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.457 [2024-07-26 01:16:24.840051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.457 [2024-07-26 01:16:24.843635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.457 [2024-07-26 01:16:24.852906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.457 [2024-07-26 01:16:24.853343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.457 [2024-07-26 01:16:24.853374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.457 [2024-07-26 01:16:24.853392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.457 [2024-07-26 01:16:24.853630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.457 [2024-07-26 01:16:24.853873] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.457 [2024-07-26 01:16:24.853898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.457 [2024-07-26 01:16:24.853914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.457 [2024-07-26 01:16:24.857495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.457 [2024-07-26 01:16:24.866769] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.457 [2024-07-26 01:16:24.867195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.457 [2024-07-26 01:16:24.867226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.457 [2024-07-26 01:16:24.867245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.457 [2024-07-26 01:16:24.867484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.457 [2024-07-26 01:16:24.867728] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.457 [2024-07-26 01:16:24.867753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.457 [2024-07-26 01:16:24.867770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.457 [2024-07-26 01:16:24.871359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.457 [2024-07-26 01:16:24.880638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.457 [2024-07-26 01:16:24.881006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.457 [2024-07-26 01:16:24.881037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.457 [2024-07-26 01:16:24.881056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.716 [2024-07-26 01:16:24.881305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.716 [2024-07-26 01:16:24.881561] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.716 [2024-07-26 01:16:24.881592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.716 [2024-07-26 01:16:24.881610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.716 [2024-07-26 01:16:24.885191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.716 [2024-07-26 01:16:24.894482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.716 [2024-07-26 01:16:24.894872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.716 [2024-07-26 01:16:24.894904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.717 [2024-07-26 01:16:24.894923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.717 [2024-07-26 01:16:24.895172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.717 [2024-07-26 01:16:24.895416] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.717 [2024-07-26 01:16:24.895441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.717 [2024-07-26 01:16:24.895458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.717 [2024-07-26 01:16:24.899032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.717 [2024-07-26 01:16:24.908524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.717 [2024-07-26 01:16:24.908934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.717 [2024-07-26 01:16:24.908966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.717 [2024-07-26 01:16:24.908984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.717 [2024-07-26 01:16:24.909232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.717 [2024-07-26 01:16:24.909475] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.717 [2024-07-26 01:16:24.909501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.717 [2024-07-26 01:16:24.909517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.717 [2024-07-26 01:16:24.913092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.717 [2024-07-26 01:16:24.922359] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.717 [2024-07-26 01:16:24.922766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.717 [2024-07-26 01:16:24.922798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.717 [2024-07-26 01:16:24.922817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.717 [2024-07-26 01:16:24.923055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.717 [2024-07-26 01:16:24.923310] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.717 [2024-07-26 01:16:24.923336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.717 [2024-07-26 01:16:24.923351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.717 [2024-07-26 01:16:24.926921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.717 [2024-07-26 01:16:24.936214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.717 [2024-07-26 01:16:24.936641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.717 [2024-07-26 01:16:24.936673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.717 [2024-07-26 01:16:24.936692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.717 [2024-07-26 01:16:24.936931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.717 [2024-07-26 01:16:24.937184] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.717 [2024-07-26 01:16:24.937211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.717 [2024-07-26 01:16:24.937228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.717 [2024-07-26 01:16:24.940799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.717 [2024-07-26 01:16:24.950078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.717 [2024-07-26 01:16:24.950470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.717 [2024-07-26 01:16:24.950502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.717 [2024-07-26 01:16:24.950521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.717 [2024-07-26 01:16:24.950760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.717 [2024-07-26 01:16:24.951004] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.717 [2024-07-26 01:16:24.951030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.717 [2024-07-26 01:16:24.951046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.717 [2024-07-26 01:16:24.954623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.717 [2024-07-26 01:16:24.964107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.717 [2024-07-26 01:16:24.964523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.717 [2024-07-26 01:16:24.964555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.717 [2024-07-26 01:16:24.964574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.717 [2024-07-26 01:16:24.964812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.717 [2024-07-26 01:16:24.965054] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.717 [2024-07-26 01:16:24.965090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.717 [2024-07-26 01:16:24.965105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.717 [2024-07-26 01:16:24.968675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.717 [2024-07-26 01:16:24.977950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.717 [2024-07-26 01:16:24.978373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.717 [2024-07-26 01:16:24.978405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.717 [2024-07-26 01:16:24.978422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.717 [2024-07-26 01:16:24.978666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.717 [2024-07-26 01:16:24.978910] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.717 [2024-07-26 01:16:24.978936] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.717 [2024-07-26 01:16:24.978952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.717 [2024-07-26 01:16:24.982533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.717 [2024-07-26 01:16:24.991804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.717 [2024-07-26 01:16:24.992208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.717 [2024-07-26 01:16:24.992240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.717 [2024-07-26 01:16:24.992258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.717 [2024-07-26 01:16:24.992496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.717 [2024-07-26 01:16:24.992738] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.717 [2024-07-26 01:16:24.992764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.717 [2024-07-26 01:16:24.992781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.717 [2024-07-26 01:16:24.996361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.717 [2024-07-26 01:16:25.005835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.717 [2024-07-26 01:16:25.006253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.717 [2024-07-26 01:16:25.006285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.717 [2024-07-26 01:16:25.006303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.717 [2024-07-26 01:16:25.006541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.717 [2024-07-26 01:16:25.006783] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.717 [2024-07-26 01:16:25.006809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.717 [2024-07-26 01:16:25.006826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.717 [2024-07-26 01:16:25.010404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.717 [2024-07-26 01:16:25.019670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.717 [2024-07-26 01:16:25.020074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.717 [2024-07-26 01:16:25.020107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.717 [2024-07-26 01:16:25.020125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.717 [2024-07-26 01:16:25.020363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.717 [2024-07-26 01:16:25.020606] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.717 [2024-07-26 01:16:25.020631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.717 [2024-07-26 01:16:25.020652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.717 [2024-07-26 01:16:25.024232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.717 [2024-07-26 01:16:25.033720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.717 [2024-07-26 01:16:25.034116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.718 [2024-07-26 01:16:25.034149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.718 [2024-07-26 01:16:25.034167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.718 [2024-07-26 01:16:25.034406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.718 [2024-07-26 01:16:25.034649] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.718 [2024-07-26 01:16:25.034675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.718 [2024-07-26 01:16:25.034691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.718 [2024-07-26 01:16:25.038270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.718 [2024-07-26 01:16:25.047745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.718 [2024-07-26 01:16:25.048167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.718 [2024-07-26 01:16:25.048200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.718 [2024-07-26 01:16:25.048218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.718 [2024-07-26 01:16:25.048457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.718 [2024-07-26 01:16:25.048699] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.718 [2024-07-26 01:16:25.048725] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.718 [2024-07-26 01:16:25.048742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.718 [2024-07-26 01:16:25.052323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:54.718 [2024-07-26 01:16:25.061591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:54.718 [2024-07-26 01:16:25.062010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.718 [2024-07-26 01:16:25.062041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:54.718 [2024-07-26 01:16:25.062068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:54.718 [2024-07-26 01:16:25.062309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:54.718 [2024-07-26 01:16:25.062552] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:54.718 [2024-07-26 01:16:25.062577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:54.718 [2024-07-26 01:16:25.062593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:54.718 [2024-07-26 01:16:25.066169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.239 [2024-07-26 01:16:25.464564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.239 [2024-07-26 01:16:25.464980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.239 [2024-07-26 01:16:25.465011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.239 [2024-07-26 01:16:25.465029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.239 [2024-07-26 01:16:25.465283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.239 [2024-07-26 01:16:25.465526] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.239 [2024-07-26 01:16:25.465551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.239 [2024-07-26 01:16:25.465566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.239 [2024-07-26 01:16:25.469145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.239 [2024-07-26 01:16:25.478434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.239 [2024-07-26 01:16:25.478843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.239 [2024-07-26 01:16:25.478876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.239 [2024-07-26 01:16:25.478894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.239 [2024-07-26 01:16:25.479143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.239 [2024-07-26 01:16:25.479387] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.239 [2024-07-26 01:16:25.479413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.239 [2024-07-26 01:16:25.479429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.240 [2024-07-26 01:16:25.483001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.240 [2024-07-26 01:16:25.492296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.240 [2024-07-26 01:16:25.492707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-07-26 01:16:25.492739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.240 [2024-07-26 01:16:25.492757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.240 [2024-07-26 01:16:25.492996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.240 [2024-07-26 01:16:25.493256] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.240 [2024-07-26 01:16:25.493283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.240 [2024-07-26 01:16:25.493300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.240 [2024-07-26 01:16:25.496872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.240 [2024-07-26 01:16:25.506152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.240 [2024-07-26 01:16:25.506569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-07-26 01:16:25.506601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.240 [2024-07-26 01:16:25.506619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.240 [2024-07-26 01:16:25.506857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.240 [2024-07-26 01:16:25.507111] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.240 [2024-07-26 01:16:25.507137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.240 [2024-07-26 01:16:25.507152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.240 [2024-07-26 01:16:25.510721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.240 [2024-07-26 01:16:25.519996] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.240 [2024-07-26 01:16:25.520418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-07-26 01:16:25.520450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.240 [2024-07-26 01:16:25.520468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.240 [2024-07-26 01:16:25.520706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.240 [2024-07-26 01:16:25.520948] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.240 [2024-07-26 01:16:25.520974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.240 [2024-07-26 01:16:25.520990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.240 [2024-07-26 01:16:25.524575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.240 [2024-07-26 01:16:25.533852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.240 [2024-07-26 01:16:25.534281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-07-26 01:16:25.534313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.240 [2024-07-26 01:16:25.534331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.240 [2024-07-26 01:16:25.534569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.240 [2024-07-26 01:16:25.534822] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.240 [2024-07-26 01:16:25.534849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.240 [2024-07-26 01:16:25.534865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.240 [2024-07-26 01:16:25.538449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.240 [2024-07-26 01:16:25.547758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.240 [2024-07-26 01:16:25.548186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-07-26 01:16:25.548220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.240 [2024-07-26 01:16:25.548238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.240 [2024-07-26 01:16:25.548477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.240 [2024-07-26 01:16:25.548720] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.240 [2024-07-26 01:16:25.548746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.240 [2024-07-26 01:16:25.548763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.240 [2024-07-26 01:16:25.552350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.240 [2024-07-26 01:16:25.561629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.240 [2024-07-26 01:16:25.562067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-07-26 01:16:25.562100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.240 [2024-07-26 01:16:25.562119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.240 [2024-07-26 01:16:25.562358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.240 [2024-07-26 01:16:25.562602] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.240 [2024-07-26 01:16:25.562628] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.240 [2024-07-26 01:16:25.562644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.240 [2024-07-26 01:16:25.566225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.240 [2024-07-26 01:16:25.575502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.240 [2024-07-26 01:16:25.575890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-07-26 01:16:25.575922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.240 [2024-07-26 01:16:25.575940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.240 [2024-07-26 01:16:25.576193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.240 [2024-07-26 01:16:25.576436] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.240 [2024-07-26 01:16:25.576462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.240 [2024-07-26 01:16:25.576478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.240 [2024-07-26 01:16:25.580057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.240 [2024-07-26 01:16:25.589544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.240 [2024-07-26 01:16:25.589956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-07-26 01:16:25.589987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.240 [2024-07-26 01:16:25.590011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.240 [2024-07-26 01:16:25.590262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.240 [2024-07-26 01:16:25.590507] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.240 [2024-07-26 01:16:25.590533] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.240 [2024-07-26 01:16:25.590549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.240 [2024-07-26 01:16:25.594142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.240 [2024-07-26 01:16:25.603425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.240 [2024-07-26 01:16:25.603836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-07-26 01:16:25.603867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.240 [2024-07-26 01:16:25.603885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.240 [2024-07-26 01:16:25.604134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.240 [2024-07-26 01:16:25.604379] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.240 [2024-07-26 01:16:25.604404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.240 [2024-07-26 01:16:25.604420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.240 [2024-07-26 01:16:25.607995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.240 [2024-07-26 01:16:25.617317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.240 [2024-07-26 01:16:25.617731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.240 [2024-07-26 01:16:25.617762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.241 [2024-07-26 01:16:25.617781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.241 [2024-07-26 01:16:25.618019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.241 [2024-07-26 01:16:25.618273] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.241 [2024-07-26 01:16:25.618299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.241 [2024-07-26 01:16:25.618316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.241 [2024-07-26 01:16:25.621885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.241 [2024-07-26 01:16:25.631177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.241 [2024-07-26 01:16:25.631599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-07-26 01:16:25.631630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.241 [2024-07-26 01:16:25.631649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.241 [2024-07-26 01:16:25.631888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.241 [2024-07-26 01:16:25.632144] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.241 [2024-07-26 01:16:25.632174] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.241 [2024-07-26 01:16:25.632192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.241 [2024-07-26 01:16:25.635780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.241 [2024-07-26 01:16:25.645083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.241 [2024-07-26 01:16:25.645510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-07-26 01:16:25.645542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.241 [2024-07-26 01:16:25.645560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.241 [2024-07-26 01:16:25.645798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.241 [2024-07-26 01:16:25.646041] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.241 [2024-07-26 01:16:25.646078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.241 [2024-07-26 01:16:25.646095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.241 [2024-07-26 01:16:25.649673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.241 [2024-07-26 01:16:25.658968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.241 [2024-07-26 01:16:25.659372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.241 [2024-07-26 01:16:25.659405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.241 [2024-07-26 01:16:25.659423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.241 [2024-07-26 01:16:25.659661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.241 [2024-07-26 01:16:25.659903] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.241 [2024-07-26 01:16:25.659929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.241 [2024-07-26 01:16:25.659946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.241 [2024-07-26 01:16:25.663527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.500 [2024-07-26 01:16:25.672807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.500 [2024-07-26 01:16:25.673228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.500 [2024-07-26 01:16:25.673260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.501 [2024-07-26 01:16:25.673279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.501 [2024-07-26 01:16:25.673518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.501 [2024-07-26 01:16:25.673762] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.501 [2024-07-26 01:16:25.673787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.501 [2024-07-26 01:16:25.673804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.501 [2024-07-26 01:16:25.677395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.501 [2024-07-26 01:16:25.686685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.501 [2024-07-26 01:16:25.687113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.501 [2024-07-26 01:16:25.687147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.501 [2024-07-26 01:16:25.687165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.501 [2024-07-26 01:16:25.687405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.501 [2024-07-26 01:16:25.687655] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.501 [2024-07-26 01:16:25.687680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.501 [2024-07-26 01:16:25.687698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.501 [2024-07-26 01:16:25.691290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.501 [2024-07-26 01:16:25.700580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.501 [2024-07-26 01:16:25.700974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.501 [2024-07-26 01:16:25.701007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.501 [2024-07-26 01:16:25.701025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.501 [2024-07-26 01:16:25.701276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.501 [2024-07-26 01:16:25.701520] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.501 [2024-07-26 01:16:25.701544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.501 [2024-07-26 01:16:25.701561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.501 [2024-07-26 01:16:25.705143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.501 [2024-07-26 01:16:25.714427] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.501 [2024-07-26 01:16:25.714843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.501 [2024-07-26 01:16:25.714875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.501 [2024-07-26 01:16:25.714893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.501 [2024-07-26 01:16:25.715142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.501 [2024-07-26 01:16:25.715386] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.501 [2024-07-26 01:16:25.715411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.501 [2024-07-26 01:16:25.715427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.501 [2024-07-26 01:16:25.718996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.501 [2024-07-26 01:16:25.728274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.501 [2024-07-26 01:16:25.728684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.501 [2024-07-26 01:16:25.728716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.501 [2024-07-26 01:16:25.728734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.501 [2024-07-26 01:16:25.728978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.501 [2024-07-26 01:16:25.729231] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.501 [2024-07-26 01:16:25.729257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.501 [2024-07-26 01:16:25.729273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.501 [2024-07-26 01:16:25.732840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.501 [2024-07-26 01:16:25.742130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.501 [2024-07-26 01:16:25.742554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.501 [2024-07-26 01:16:25.742586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.501 [2024-07-26 01:16:25.742604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.501 [2024-07-26 01:16:25.742842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.501 [2024-07-26 01:16:25.743094] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.501 [2024-07-26 01:16:25.743125] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.501 [2024-07-26 01:16:25.743141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.501 [2024-07-26 01:16:25.746713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.501 [2024-07-26 01:16:25.756000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.501 [2024-07-26 01:16:25.756419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.501 [2024-07-26 01:16:25.756451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.501 [2024-07-26 01:16:25.756469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.501 [2024-07-26 01:16:25.756707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.501 [2024-07-26 01:16:25.756950] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.501 [2024-07-26 01:16:25.756975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.501 [2024-07-26 01:16:25.756991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.501 [2024-07-26 01:16:25.760596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.501 [2024-07-26 01:16:25.769888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.501 [2024-07-26 01:16:25.770286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.501 [2024-07-26 01:16:25.770330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.501 [2024-07-26 01:16:25.770349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.501 [2024-07-26 01:16:25.770588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.501 [2024-07-26 01:16:25.770832] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.501 [2024-07-26 01:16:25.770858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.501 [2024-07-26 01:16:25.770880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.501 [2024-07-26 01:16:25.774475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.501 [2024-07-26 01:16:25.783763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.501 [2024-07-26 01:16:25.784167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.501 [2024-07-26 01:16:25.784199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.501 [2024-07-26 01:16:25.784218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.501 [2024-07-26 01:16:25.784456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.501 [2024-07-26 01:16:25.784701] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.501 [2024-07-26 01:16:25.784727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.501 [2024-07-26 01:16:25.784743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.501 [2024-07-26 01:16:25.788330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.501 [2024-07-26 01:16:25.797615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.501 [2024-07-26 01:16:25.798028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.501 [2024-07-26 01:16:25.798069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.501 [2024-07-26 01:16:25.798090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.501 [2024-07-26 01:16:25.798329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.502 [2024-07-26 01:16:25.798572] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.502 [2024-07-26 01:16:25.798598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.502 [2024-07-26 01:16:25.798614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.502 [2024-07-26 01:16:25.802205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.502 [2024-07-26 01:16:25.811486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.502 [2024-07-26 01:16:25.811903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.502 [2024-07-26 01:16:25.811935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.502 [2024-07-26 01:16:25.811953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.502 [2024-07-26 01:16:25.812206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.502 [2024-07-26 01:16:25.812448] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.502 [2024-07-26 01:16:25.812474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.502 [2024-07-26 01:16:25.812490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.502 [2024-07-26 01:16:25.816076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.502 [2024-07-26 01:16:25.825364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.502 [2024-07-26 01:16:25.825778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.502 [2024-07-26 01:16:25.825815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.502 [2024-07-26 01:16:25.825834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.502 [2024-07-26 01:16:25.826086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.502 [2024-07-26 01:16:25.826330] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.502 [2024-07-26 01:16:25.826356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.502 [2024-07-26 01:16:25.826372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.502 [2024-07-26 01:16:25.829943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.502 [2024-07-26 01:16:25.839250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.502 [2024-07-26 01:16:25.839673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.502 [2024-07-26 01:16:25.839705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.502 [2024-07-26 01:16:25.839724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.502 [2024-07-26 01:16:25.839963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.502 [2024-07-26 01:16:25.840222] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.502 [2024-07-26 01:16:25.840248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.502 [2024-07-26 01:16:25.840265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.502 [2024-07-26 01:16:25.843837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.502 [2024-07-26 01:16:25.853135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.502 [2024-07-26 01:16:25.853551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.502 [2024-07-26 01:16:25.853582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.502 [2024-07-26 01:16:25.853600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.502 [2024-07-26 01:16:25.853839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.502 [2024-07-26 01:16:25.854094] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.502 [2024-07-26 01:16:25.854120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.502 [2024-07-26 01:16:25.854136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.502 [2024-07-26 01:16:25.857709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.502 [2024-07-26 01:16:25.866992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.502 [2024-07-26 01:16:25.867467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.502 [2024-07-26 01:16:25.867499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.502 [2024-07-26 01:16:25.867517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.502 [2024-07-26 01:16:25.867755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.502 [2024-07-26 01:16:25.868004] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.502 [2024-07-26 01:16:25.868030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.502 [2024-07-26 01:16:25.868046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.502 [2024-07-26 01:16:25.871632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.502 [2024-07-26 01:16:25.880935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.502 [2024-07-26 01:16:25.881353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.502 [2024-07-26 01:16:25.881383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.502 [2024-07-26 01:16:25.881401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.502 [2024-07-26 01:16:25.881640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.502 [2024-07-26 01:16:25.881881] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.502 [2024-07-26 01:16:25.881905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.502 [2024-07-26 01:16:25.881921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.502 [2024-07-26 01:16:25.885513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.502 [2024-07-26 01:16:25.894808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.502 [2024-07-26 01:16:25.895237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.502 [2024-07-26 01:16:25.895268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.502 [2024-07-26 01:16:25.895286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.502 [2024-07-26 01:16:25.895525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.502 [2024-07-26 01:16:25.895767] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.502 [2024-07-26 01:16:25.895793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.502 [2024-07-26 01:16:25.895809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.502 [2024-07-26 01:16:25.899394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.502 [2024-07-26 01:16:25.908681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.502 [2024-07-26 01:16:25.909092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.502 [2024-07-26 01:16:25.909125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.502 [2024-07-26 01:16:25.909143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.502 [2024-07-26 01:16:25.909381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.502 [2024-07-26 01:16:25.909624] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.502 [2024-07-26 01:16:25.909650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.502 [2024-07-26 01:16:25.909666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.502 [2024-07-26 01:16:25.913260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.502 [2024-07-26 01:16:25.922555] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.502 [2024-07-26 01:16:25.922999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.502 [2024-07-26 01:16:25.923031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.502 [2024-07-26 01:16:25.923049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.502 [2024-07-26 01:16:25.923296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.502 [2024-07-26 01:16:25.923539] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.502 [2024-07-26 01:16:25.923564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.502 [2024-07-26 01:16:25.923581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.762 [2024-07-26 01:16:25.927163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.762 [2024-07-26 01:16:25.936460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.763 [2024-07-26 01:16:25.936882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.763 [2024-07-26 01:16:25.936914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.763 [2024-07-26 01:16:25.936933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.763 [2024-07-26 01:16:25.937181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.763 [2024-07-26 01:16:25.937425] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.763 [2024-07-26 01:16:25.937451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.763 [2024-07-26 01:16:25.937468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.763 [2024-07-26 01:16:25.941041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.763 [2024-07-26 01:16:25.950338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.763 [2024-07-26 01:16:25.950825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.763 [2024-07-26 01:16:25.950875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.763 [2024-07-26 01:16:25.950894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.763 [2024-07-26 01:16:25.951149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.763 [2024-07-26 01:16:25.951394] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.763 [2024-07-26 01:16:25.951420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.763 [2024-07-26 01:16:25.951436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.763 [2024-07-26 01:16:25.955012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.763 [2024-07-26 01:16:25.964297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.763 [2024-07-26 01:16:25.964710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.763 [2024-07-26 01:16:25.964742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.763 [2024-07-26 01:16:25.964766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.763 [2024-07-26 01:16:25.965005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.763 [2024-07-26 01:16:25.965262] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.763 [2024-07-26 01:16:25.965289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.763 [2024-07-26 01:16:25.965305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.763 [2024-07-26 01:16:25.968877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.763 [2024-07-26 01:16:25.978164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.763 [2024-07-26 01:16:25.978580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.763 [2024-07-26 01:16:25.978612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.763 [2024-07-26 01:16:25.978630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.763 [2024-07-26 01:16:25.978868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.763 [2024-07-26 01:16:25.979124] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.763 [2024-07-26 01:16:25.979150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.763 [2024-07-26 01:16:25.979168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.763 [2024-07-26 01:16:25.982740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.763 [2024-07-26 01:16:25.992015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.763 [2024-07-26 01:16:25.992428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.763 [2024-07-26 01:16:25.992460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.763 [2024-07-26 01:16:25.992478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.763 [2024-07-26 01:16:25.992716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.763 [2024-07-26 01:16:25.992958] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.763 [2024-07-26 01:16:25.992984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.763 [2024-07-26 01:16:25.993000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.763 [2024-07-26 01:16:25.996588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.763 [2024-07-26 01:16:26.005870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.763 [2024-07-26 01:16:26.006288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.763 [2024-07-26 01:16:26.006321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.763 [2024-07-26 01:16:26.006339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.763 [2024-07-26 01:16:26.006577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.763 [2024-07-26 01:16:26.006820] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.763 [2024-07-26 01:16:26.006851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.763 [2024-07-26 01:16:26.006868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.763 [2024-07-26 01:16:26.010454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.763 [2024-07-26 01:16:26.019733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.763 [2024-07-26 01:16:26.020123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.763 [2024-07-26 01:16:26.020156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.763 [2024-07-26 01:16:26.020175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.763 [2024-07-26 01:16:26.020414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.763 [2024-07-26 01:16:26.020659] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.763 [2024-07-26 01:16:26.020685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.763 [2024-07-26 01:16:26.020701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.763 [2024-07-26 01:16:26.024283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.763 [2024-07-26 01:16:26.033770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.763 [2024-07-26 01:16:26.034194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.763 [2024-07-26 01:16:26.034227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.763 [2024-07-26 01:16:26.034246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.763 [2024-07-26 01:16:26.034485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.763 [2024-07-26 01:16:26.034729] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.763 [2024-07-26 01:16:26.034755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.763 [2024-07-26 01:16:26.034772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.763 [2024-07-26 01:16:26.038371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.763 [2024-07-26 01:16:26.047651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.763 [2024-07-26 01:16:26.048076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.763 [2024-07-26 01:16:26.048108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.763 [2024-07-26 01:16:26.048126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.763 [2024-07-26 01:16:26.048365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.763 [2024-07-26 01:16:26.048607] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.763 [2024-07-26 01:16:26.048633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.763 [2024-07-26 01:16:26.048648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.763 [2024-07-26 01:16:26.052226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.763 [2024-07-26 01:16:26.061517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.763 [2024-07-26 01:16:26.061946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.763 [2024-07-26 01:16:26.061978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.763 [2024-07-26 01:16:26.061997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.763 [2024-07-26 01:16:26.062249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.763 [2024-07-26 01:16:26.062493] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.764 [2024-07-26 01:16:26.062519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.764 [2024-07-26 01:16:26.062536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.764 [2024-07-26 01:16:26.066115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.764 [2024-07-26 01:16:26.075395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.764 [2024-07-26 01:16:26.075817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.764 [2024-07-26 01:16:26.075850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.764 [2024-07-26 01:16:26.075868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.764 [2024-07-26 01:16:26.076121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.764 [2024-07-26 01:16:26.076364] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.764 [2024-07-26 01:16:26.076390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.764 [2024-07-26 01:16:26.076406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.764 [2024-07-26 01:16:26.079978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.764 [2024-07-26 01:16:26.089266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.764 [2024-07-26 01:16:26.089658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.764 [2024-07-26 01:16:26.089690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.764 [2024-07-26 01:16:26.089708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.764 [2024-07-26 01:16:26.089946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.764 [2024-07-26 01:16:26.090200] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.764 [2024-07-26 01:16:26.090227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.764 [2024-07-26 01:16:26.090243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.764 [2024-07-26 01:16:26.093820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.764 [2024-07-26 01:16:26.103112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.764 [2024-07-26 01:16:26.103498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.764 [2024-07-26 01:16:26.103530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.764 [2024-07-26 01:16:26.103548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.764 [2024-07-26 01:16:26.103793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.764 [2024-07-26 01:16:26.104035] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.764 [2024-07-26 01:16:26.104073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.764 [2024-07-26 01:16:26.104091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.764 [2024-07-26 01:16:26.107665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.764 [2024-07-26 01:16:26.116945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:55.764 [2024-07-26 01:16:26.117375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.764 [2024-07-26 01:16:26.117408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:55.764 [2024-07-26 01:16:26.117427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:55.764 [2024-07-26 01:16:26.117666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:55.764 [2024-07-26 01:16:26.117910] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.764 [2024-07-26 01:16:26.117935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.764 [2024-07-26 01:16:26.117952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.764 [2024-07-26 01:16:26.121538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:55.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1979887 Killed "${NVMF_APP[@]}" "$@"
00:33:55.764 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:33:55.764 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:33:55.764 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:33:55.764 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:33:55.764 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:55.764 [2024-07-26 01:16:26.130824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:55.764 [2024-07-26 01:16:26.131241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.764 [2024-07-26 01:16:26.131273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:55.764 [2024-07-26 01:16:26.131292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:55.764 [2024-07-26 01:16:26.131532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:55.764 [2024-07-26 01:16:26.131776] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:55.764 [2024-07-26 01:16:26.131802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:55.764 [2024-07-26 01:16:26.131818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:55.764 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1980836
00:33:55.764 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:33:55.764 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1980836
00:33:55.764 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1980836 ']'
00:33:55.764 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:55.764 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:33:55.764 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:55.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:55.764 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:33:55.764 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:55.764 [2024-07-26 01:16:26.135404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:55.764 [2024-07-26 01:16:26.144704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:55.764 [2024-07-26 01:16:26.145114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.764 [2024-07-26 01:16:26.145146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:55.764 [2024-07-26 01:16:26.145165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:55.764 [2024-07-26 01:16:26.145404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:55.764 [2024-07-26 01:16:26.145646] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:55.764 [2024-07-26 01:16:26.145670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:55.764 [2024-07-26 01:16:26.145686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:55.764 [2024-07-26 01:16:26.149272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:55.764 [2024-07-26 01:16:26.158564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:55.764 [2024-07-26 01:16:26.158977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.764 [2024-07-26 01:16:26.159009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:55.764 [2024-07-26 01:16:26.159027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:55.764 [2024-07-26 01:16:26.159276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:55.764 [2024-07-26 01:16:26.159519] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:55.765 [2024-07-26 01:16:26.159544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:55.765 [2024-07-26 01:16:26.159560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:55.765 [2024-07-26 01:16:26.163145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:55.765 [2024-07-26 01:16:26.171913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:55.765 [2024-07-26 01:16:26.172346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.765 [2024-07-26 01:16:26.172390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:55.765 [2024-07-26 01:16:26.172406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:55.765 [2024-07-26 01:16:26.172652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:55.765 [2024-07-26 01:16:26.172847] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:55.765 [2024-07-26 01:16:26.172867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:55.765 [2024-07-26 01:16:26.172889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:55.765 [2024-07-26 01:16:26.175855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:55.765 [2024-07-26 01:16:26.178805] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization...
00:33:55.765 [2024-07-26 01:16:26.178864] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:55.765 [2024-07-26 01:16:26.185445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:55.765 [2024-07-26 01:16:26.185818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:55.765 [2024-07-26 01:16:26.185845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:55.765 [2024-07-26 01:16:26.185860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:55.765 [2024-07-26 01:16:26.186113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:55.765 [2024-07-26 01:16:26.186347] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:55.765 [2024-07-26 01:16:26.186369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:55.765 [2024-07-26 01:16:26.186384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.025 [2024-07-26 01:16:26.189747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.025 [2024-07-26 01:16:26.198825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.025 [2024-07-26 01:16:26.199209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.025 [2024-07-26 01:16:26.199237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:56.025 [2024-07-26 01:16:26.199254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:56.025 [2024-07-26 01:16:26.199506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:56.025 [2024-07-26 01:16:26.199700] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.025 [2024-07-26 01:16:26.199719] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.025 [2024-07-26 01:16:26.199732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.025 [2024-07-26 01:16:26.202726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.025 [2024-07-26 01:16:26.212211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.025 [2024-07-26 01:16:26.212606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.025 [2024-07-26 01:16:26.212634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:56.026 [2024-07-26 01:16:26.212650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:56.026 [2024-07-26 01:16:26.212888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:56.026 [2024-07-26 01:16:26.213124] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.026 [2024-07-26 01:16:26.213145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.026 [2024-07-26 01:16:26.213164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.026 EAL: No free 2048 kB hugepages reported on node 1
00:33:56.026 [2024-07-26 01:16:26.216157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.026 [2024-07-26 01:16:26.226057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.026 [2024-07-26 01:16:26.226421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.026 [2024-07-26 01:16:26.226449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:56.026 [2024-07-26 01:16:26.226465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:56.026 [2024-07-26 01:16:26.226698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:56.026 [2024-07-26 01:16:26.226942] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.026 [2024-07-26 01:16:26.226966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.026 [2024-07-26 01:16:26.226982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.026 [2024-07-26 01:16:26.230558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.026 [2024-07-26 01:16:26.239935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.026 [2024-07-26 01:16:26.240347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.026 [2024-07-26 01:16:26.240390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:56.026 [2024-07-26 01:16:26.240407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:56.026 [2024-07-26 01:16:26.240642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:56.026 [2024-07-26 01:16:26.240886] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.026 [2024-07-26 01:16:26.240910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.026 [2024-07-26 01:16:26.240927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.026 [2024-07-26 01:16:26.244435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.026 [2024-07-26 01:16:26.248756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:33:56.026 [2024-07-26 01:16:26.253872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.026 [2024-07-26 01:16:26.254365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.026 [2024-07-26 01:16:26.254393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:56.026 [2024-07-26 01:16:26.254409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:56.026 [2024-07-26 01:16:26.254623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:56.026 [2024-07-26 01:16:26.254817] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.026 [2024-07-26 01:16:26.254837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.026 [2024-07-26 01:16:26.254851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.026 [2024-07-26 01:16:26.257822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.026 [2024-07-26 01:16:26.267150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.026 [2024-07-26 01:16:26.267816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.026 [2024-07-26 01:16:26.267853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:56.026 [2024-07-26 01:16:26.267887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:56.026 [2024-07-26 01:16:26.268154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:56.026 [2024-07-26 01:16:26.268378] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.026 [2024-07-26 01:16:26.268399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.026 [2024-07-26 01:16:26.268430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.026 [2024-07-26 01:16:26.271398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.026 [2024-07-26 01:16:26.280530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.026 [2024-07-26 01:16:26.280981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.026 [2024-07-26 01:16:26.281010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:56.026 [2024-07-26 01:16:26.281027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:56.026 [2024-07-26 01:16:26.281288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:56.026 [2024-07-26 01:16:26.281518] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.026 [2024-07-26 01:16:26.281539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.026 [2024-07-26 01:16:26.281553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.026 [2024-07-26 01:16:26.284529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.026 [2024-07-26 01:16:26.293795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.026 [2024-07-26 01:16:26.294228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.026 [2024-07-26 01:16:26.294260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:56.026 [2024-07-26 01:16:26.294277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:56.026 [2024-07-26 01:16:26.294535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:56.026 [2024-07-26 01:16:26.294730] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.026 [2024-07-26 01:16:26.294750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.026 [2024-07-26 01:16:26.294764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.026 [2024-07-26 01:16:26.297727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.026 [2024-07-26 01:16:26.307030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.026 [2024-07-26 01:16:26.307580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.026 [2024-07-26 01:16:26.307618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:56.026 [2024-07-26 01:16:26.307637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:56.026 [2024-07-26 01:16:26.307904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:56.026 [2024-07-26 01:16:26.308130] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.026 [2024-07-26 01:16:26.308151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.026 [2024-07-26 01:16:26.308167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.026 [2024-07-26 01:16:26.311131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.026 [2024-07-26 01:16:26.320409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.026 [2024-07-26 01:16:26.320800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.026 [2024-07-26 01:16:26.320828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:56.026 [2024-07-26 01:16:26.320845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:56.026 [2024-07-26 01:16:26.321109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:56.026 [2024-07-26 01:16:26.321323] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.026 [2024-07-26 01:16:26.321345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.026 [2024-07-26 01:16:26.321359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.026 [2024-07-26 01:16:26.324437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.026 [2024-07-26 01:16:26.333804] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:56.026 [2024-07-26 01:16:26.333841] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:56.026 [2024-07-26 01:16:26.333839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.026 [2024-07-26 01:16:26.333858] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:56.026 [2024-07-26 01:16:26.333873] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:56.026 [2024-07-26 01:16:26.333885] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:56.026 [2024-07-26 01:16:26.333974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:33:56.026 [2024-07-26 01:16:26.334027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:33:56.026 [2024-07-26 01:16:26.334029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:33:56.026 [2024-07-26 01:16:26.334273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.026 [2024-07-26 01:16:26.334306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:56.026 [2024-07-26 01:16:26.334326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:56.026 [2024-07-26 01:16:26.334566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:56.027 [2024-07-26 01:16:26.334811] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.027 [2024-07-26 01:16:26.334836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.027 [2024-07-26 01:16:26.334852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.027 [2024-07-26 01:16:26.338457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.027 [2024-07-26 01:16:26.347752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.027 [2024-07-26 01:16:26.348317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.027 [2024-07-26 01:16:26.348362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:56.027 [2024-07-26 01:16:26.348384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:56.027 [2024-07-26 01:16:26.348632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:56.027 [2024-07-26 01:16:26.348881] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.027 [2024-07-26 01:16:26.348906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.027 [2024-07-26 01:16:26.348925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.027 [2024-07-26 01:16:26.352129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.027 [2024-07-26 01:16:26.361341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.027 [2024-07-26 01:16:26.361948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.027 [2024-07-26 01:16:26.361989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:56.027 [2024-07-26 01:16:26.362008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:56.027 [2024-07-26 01:16:26.362258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:56.027 [2024-07-26 01:16:26.362486] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.027 [2024-07-26 01:16:26.362508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.027 [2024-07-26 01:16:26.362524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.027 [2024-07-26 01:16:26.365702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.027 [2024-07-26 01:16:26.374851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.027 [2024-07-26 01:16:26.375439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.027 [2024-07-26 01:16:26.375480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:56.027 [2024-07-26 01:16:26.375500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:56.027 [2024-07-26 01:16:26.375751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:56.027 [2024-07-26 01:16:26.375961] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.027 [2024-07-26 01:16:26.375982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.027 [2024-07-26 01:16:26.375998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.027 [2024-07-26 01:16:26.379204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.027 [2024-07-26 01:16:26.388412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.027 [2024-07-26 01:16:26.388888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.027 [2024-07-26 01:16:26.388924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:56.027 [2024-07-26 01:16:26.388942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:56.027 [2024-07-26 01:16:26.389208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:56.027 [2024-07-26 01:16:26.389439] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.027 [2024-07-26 01:16:26.389461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.027 [2024-07-26 01:16:26.389477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.027 [2024-07-26 01:16:26.392641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.027 [2024-07-26 01:16:26.401964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.027 [2024-07-26 01:16:26.402593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.027 [2024-07-26 01:16:26.402636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:56.027 [2024-07-26 01:16:26.402656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:56.027 [2024-07-26 01:16:26.402905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:56.027 [2024-07-26 01:16:26.403145] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.027 [2024-07-26 01:16:26.403169] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.027 [2024-07-26 01:16:26.403186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.027 [2024-07-26 01:16:26.406397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.027 [2024-07-26 01:16:26.415558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.027 [2024-07-26 01:16:26.416110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.027 [2024-07-26 01:16:26.416147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420
00:33:56.027 [2024-07-26 01:16:26.416166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set
00:33:56.027 [2024-07-26 01:16:26.416415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor
00:33:56.027 [2024-07-26 01:16:26.416623] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.027 [2024-07-26 01:16:26.416645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.027 [2024-07-26 01:16:26.416661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.027 [2024-07-26 01:16:26.419877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.027 [2024-07-26 01:16:26.429158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.027 [2024-07-26 01:16:26.429559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.027 [2024-07-26 01:16:26.429589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:56.027 [2024-07-26 01:16:26.429607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:56.027 [2024-07-26 01:16:26.429823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:56.027 [2024-07-26 01:16:26.430051] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.027 [2024-07-26 01:16:26.430097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.027 [2024-07-26 01:16:26.430113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.027 [2024-07-26 01:16:26.433385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.027 [2024-07-26 01:16:26.442762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.027 [2024-07-26 01:16:26.443147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.027 [2024-07-26 01:16:26.443177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:56.027 [2024-07-26 01:16:26.443195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:56.027 [2024-07-26 01:16:26.443411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:56.027 [2024-07-26 01:16:26.443640] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.027 [2024-07-26 01:16:26.443663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.027 [2024-07-26 01:16:26.443677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.027 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:56.027 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:33:56.027 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:56.027 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:56.027 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:56.027 [2024-07-26 01:16:26.446971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.286 [2024-07-26 01:16:26.456257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.286 [2024-07-26 01:16:26.456638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.286 [2024-07-26 01:16:26.456666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:56.286 [2024-07-26 01:16:26.456683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:56.286 [2024-07-26 01:16:26.456897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:56.286 [2024-07-26 01:16:26.457127] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.286 [2024-07-26 01:16:26.457150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.286 [2024-07-26 01:16:26.457164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.286 [2024-07-26 01:16:26.460463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.286 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:56.286 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:56.286 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.286 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:56.286 [2024-07-26 01:16:26.469879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.286 [2024-07-26 01:16:26.470251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.286 [2024-07-26 01:16:26.470280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:56.286 [2024-07-26 01:16:26.470296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:56.286 [2024-07-26 01:16:26.470543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:56.286 [2024-07-26 01:16:26.470750] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.286 [2024-07-26 01:16:26.470772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.286 [2024-07-26 01:16:26.470785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.286 [2024-07-26 01:16:26.473977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.286 [2024-07-26 01:16:26.475857] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:56.286 [2024-07-26 01:16:26.483461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.286 [2024-07-26 01:16:26.483926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.286 [2024-07-26 01:16:26.483954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:56.286 [2024-07-26 01:16:26.483969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:56.286 [2024-07-26 01:16:26.484194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:56.286 [2024-07-26 01:16:26.484442] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.286 [2024-07-26 01:16:26.484463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.286 [2024-07-26 01:16:26.484498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.286 [2024-07-26 01:16:26.487830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.286 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.286 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:56.286 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.286 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:56.286 [2024-07-26 01:16:26.496987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.286 [2024-07-26 01:16:26.497352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.286 [2024-07-26 01:16:26.497381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:56.286 [2024-07-26 01:16:26.497399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:56.286 [2024-07-26 01:16:26.497629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:56.286 [2024-07-26 01:16:26.497853] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.286 [2024-07-26 01:16:26.497874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.286 [2024-07-26 01:16:26.497888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.286 [2024-07-26 01:16:26.501117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.286 [2024-07-26 01:16:26.510577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.286 [2024-07-26 01:16:26.511181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.286 [2024-07-26 01:16:26.511220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:56.286 [2024-07-26 01:16:26.511239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:56.286 [2024-07-26 01:16:26.511500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:56.286 [2024-07-26 01:16:26.511722] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.287 [2024-07-26 01:16:26.511743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.287 [2024-07-26 01:16:26.511759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.287 Malloc0 00:33:56.287 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.287 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:56.287 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.287 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:56.287 [2024-07-26 01:16:26.515028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.287 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.287 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:56.287 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.287 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:56.287 [2024-07-26 01:16:26.524233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.287 [2024-07-26 01:16:26.524691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.287 [2024-07-26 01:16:26.524720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824f70 with addr=10.0.0.2, port=4420 00:33:56.287 [2024-07-26 01:16:26.524736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f70 is same with the state(5) to be set 00:33:56.287 [2024-07-26 01:16:26.524965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824f70 (9): Bad file descriptor 00:33:56.287 [2024-07-26 01:16:26.525211] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.287 [2024-07-26 01:16:26.525235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.287 [2024-07-26 01:16:26.525250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.287 [2024-07-26 01:16:26.528487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.287 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.287 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:56.287 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.287 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:56.287 [2024-07-26 01:16:26.533624] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:56.287 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.287 01:16:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1980174 00:33:56.287 [2024-07-26 01:16:26.537860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.287 [2024-07-26 01:16:26.701905] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:34:06.257 00:34:06.257 Latency(us) 00:34:06.257 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:06.257 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:06.257 Verification LBA range: start 0x0 length 0x4000 00:34:06.257 Nvme1n1 : 15.01 6469.16 25.27 8852.61 0.00 8328.93 831.34 22622.06 00:34:06.257 =================================================================================================================== 00:34:06.257 Total : 6469.16 25.27 8852.61 0.00 8328.93 831.34 22622.06 00:34:06.257 01:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:34:06.257 01:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:06.257 01:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.257 01:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:06.257 01:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.257 01:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:34:06.257 01:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:34:06.257 01:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:06.258 01:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:34:06.258 01:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:06.258 01:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:34:06.258 01:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:06.258 01:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:06.258 rmmod nvme_tcp 00:34:06.258 rmmod nvme_fabrics 00:34:06.258 rmmod nvme_keyring 00:34:06.258 01:16:35 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:06.258 01:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:34:06.258 01:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:34:06.258 01:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1980836 ']' 00:34:06.258 01:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1980836 00:34:06.258 01:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1980836 ']' 00:34:06.258 01:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1980836 00:34:06.258 01:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:34:06.258 01:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:06.258 01:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1980836 00:34:06.258 01:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:06.258 01:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:06.258 01:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1980836' 00:34:06.258 killing process with pid 1980836 00:34:06.258 01:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1980836 00:34:06.258 01:16:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1980836 00:34:06.258 01:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:06.258 01:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:06.258 01:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:06.258 01:16:36 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:06.258 01:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:06.258 01:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:06.258 01:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:06.258 01:16:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:08.164 01:16:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:08.164 00:34:08.164 real 0m22.187s 00:34:08.164 user 0m59.340s 00:34:08.164 sys 0m4.290s 00:34:08.164 01:16:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:08.164 01:16:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:08.164 ************************************ 00:34:08.164 END TEST nvmf_bdevperf 00:34:08.164 ************************************ 00:34:08.164 01:16:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:08.164 01:16:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:08.164 01:16:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:08.164 01:16:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.164 ************************************ 00:34:08.164 START TEST nvmf_target_disconnect 00:34:08.164 ************************************ 00:34:08.164 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:08.164 * Looking for test storage... 
00:34:08.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:08.164 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:08.164 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:34:08.164 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:08.164 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:08.164 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:08.164 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:08.164 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:08.164 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:08.164 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:08.164 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:08.164 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:08.164 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:08.164 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:08.164 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:08.164 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:08.164 01:16:38 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:08.164 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:08.164 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:08.164 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:08.164 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:08.164 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:08.164 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:08.164 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.165 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.165 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.165 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:08.165 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.165 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:34:08.165 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:08.165 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:08.165 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:08.165 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:08.165 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:08.165 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:08.165 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:08.165 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:08.165 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:08.165 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:08.165 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:34:08.165 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:08.165 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:08.165 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:08.165 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:08.165 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:08.165 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:08.165 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:08.165 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:08.165 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:08.165 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:08.165 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:08.165 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:34:08.165 01:16:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:10.063 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:10.063 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:34:10.063 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:10.063 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:10.063 
01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:10.063 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:10.063 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:10.063 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:34:10.063 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:10.063 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:34:10.063 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:34:10.063 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:34:10.063 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:34:10.063 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:34:10.063 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:34:10.063 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:10.063 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:10.063 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:10.063 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:10.063 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:10.063 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:10.063 01:16:40 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:10.063 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:10.063 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:10.063 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:10.063 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:10.063 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:10.063 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:10.063 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:10.063 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:10.064 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:10.064 01:16:40 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:10.064 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up 
== up ]] 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:10.064 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:10.064 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:34:10.064 01:16:40 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:10.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:10.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:34:10.064 00:34:10.064 --- 10.0.0.2 ping statistics --- 00:34:10.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:10.064 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:10.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:10.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:34:10.064 00:34:10.064 --- 10.0.0.1 ping statistics --- 00:34:10.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:10.064 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:10.064 ************************************ 00:34:10.064 START TEST nvmf_target_disconnect_tc1 00:34:10.064 ************************************ 00:34:10.064 01:16:40 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:10.064 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:10.065 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:10.065 EAL: No free 2048 kB hugepages reported on node 1 00:34:10.065 [2024-07-26 01:16:40.359548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.065 [2024-07-26 01:16:40.359627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x913590 with addr=10.0.0.2, port=4420 00:34:10.065 [2024-07-26 01:16:40.359666] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:10.065 [2024-07-26 01:16:40.359688] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:10.065 [2024-07-26 01:16:40.359703] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:34:10.065 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:10.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:10.065 Initializing NVMe Controllers 00:34:10.065 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:34:10.065 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:10.065 01:16:40 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:10.065 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:10.065 00:34:10.065 real 0m0.092s 00:34:10.065 user 0m0.036s 00:34:10.065 sys 0m0.056s 00:34:10.065 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:10.065 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:10.065 ************************************ 00:34:10.065 END TEST nvmf_target_disconnect_tc1 00:34:10.065 ************************************ 00:34:10.065 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:10.065 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:10.065 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:10.065 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:10.065 ************************************ 00:34:10.065 START TEST nvmf_target_disconnect_tc2 00:34:10.065 ************************************ 00:34:10.065 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:34:10.065 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:10.065 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:10.065 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:10.065 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:10.065 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:10.065 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1983977 00:34:10.065 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:10.065 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1983977 00:34:10.065 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1983977 ']' 00:34:10.065 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:10.065 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:10.065 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:10.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:10.065 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:10.065 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:10.065 [2024-07-26 01:16:40.468052] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:34:10.065 [2024-07-26 01:16:40.468133] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:10.323 EAL: No free 2048 kB hugepages reported on node 1 00:34:10.323 [2024-07-26 01:16:40.534713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:10.323 [2024-07-26 01:16:40.623536] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:10.323 [2024-07-26 01:16:40.623603] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:10.323 [2024-07-26 01:16:40.623617] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:10.323 [2024-07-26 01:16:40.623628] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:10.323 [2024-07-26 01:16:40.623638] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
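The startup notices above come from `nvmf_tgt` being launched inside the target network namespace (`ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF0`). A minimal bash sketch of how that wrapper is built, mirroring the `NVMF_TARGET_NS_CMD` / `NVMF_APP` array handling visible in the nvmf/common.sh trace (the binary path here is illustrative, not the Jenkins workspace path):

```shell
#!/usr/bin/env bash
# Sketch of the namespace wrapping seen in nvmf/common.sh: the target app
# command is an array, and an "ip netns exec <ns>" prefix array is prepended
# so every target-side process runs inside the namespace.
NVMF_TARGET_NAMESPACE="cvl_0_0_ns_spdk"
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
# Illustrative app path and flags (the trace uses the Jenkins workspace build):
NVMF_APP=(./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0)
# Prepend the namespace wrapper, as common.sh@270 does:
NVMF_APP=("${NVMF_APP[@]}")
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
echo "${NVMF_APP[*]}"
```

Keeping the command as an array (rather than a flat string) preserves argument boundaries when the app is later invoked as `"${NVMF_APP[@]}"`.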
00:34:10.323 [2024-07-26 01:16:40.623723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:34:10.323 [2024-07-26 01:16:40.623746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:34:10.323 [2024-07-26 01:16:40.623799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:34:10.323 [2024-07-26 01:16:40.623802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:34:10.323 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:10.323 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:34:10.323 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:10.323 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:10.323 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:10.581 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:10.581 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:10.581 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.581 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:10.581 Malloc0 00:34:10.581 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.581 01:16:40 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:10.581 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.581 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:10.581 [2024-07-26 01:16:40.798416] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:10.581 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.581 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:10.581 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.581 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:10.581 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.581 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:10.581 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.581 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:10.581 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.581 01:16:40 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:10.581 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.581 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:10.581 [2024-07-26 01:16:40.826681] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:10.581 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.581 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:10.581 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.581 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:10.581 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.581 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1984005 00:34:10.581 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:10.581 01:16:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:10.581 EAL: No free 2048 kB 
hugepages reported on node 1 00:34:12.549 01:16:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1983977 00:34:12.549 01:16:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:34:12.549 Read completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Read completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Read completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Read completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Read completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Read completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Read completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Read completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Read completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Read completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Read completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Read completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Read completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Read completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Read completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Write completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Write completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Read completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Read completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Write completed with error (sct=0, sc=8) 
00:34:12.549 starting I/O failed 00:34:12.549 Write completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Write completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Read completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Read completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Read completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Write completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Write completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Read completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Read completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Write completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Write completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Write completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Read completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Read completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Read completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 [2024-07-26 01:16:42.852659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:12.549 Read completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Read completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Read completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Read completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Read completed with error (sct=0, sc=8) 00:34:12.549 starting I/O failed 00:34:12.549 Read completed with error (sct=0, sc=8) 00:34:12.549 
00:34:12.549 Read completed with error (sct=0, sc=8)
00:34:12.549 starting I/O failed
00:34:12.549 Write completed with error (sct=0, sc=8)
00:34:12.549 starting I/O failed
[... further identical Read/Write completion errors at 00:34:12.549 omitted ...]
00:34:12.549 [2024-07-26 01:16:42.853014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... further identical Read/Write completion errors at 00:34:12.549-00:34:12.550 omitted ...]
00:34:12.550 [2024-07-26 01:16:42.853305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... further identical Read/Write completion errors at 00:34:12.550 omitted ...]
00:34:12.550 [2024-07-26 01:16:42.853584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.550 [2024-07-26 01:16:42.853816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.550 [2024-07-26 01:16:42.853858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.550 qpair failed and we were unable to recover it.
[... 00:34:12.550 (01:16:42.854024-01:16:42.854550): four further identical connect() failed (errno = 111) / sock connection error sequences for tqpair=0xd70600 with addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it.", omitted ...]
00:34:12.550 [2024-07-26 01:16:42.854694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.550 [2024-07-26 01:16:42.854726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.550 qpair failed and we were unable to recover it.
[... 00:34:12.550-00:34:12.553 (01:16:42.854846-01:16:42.871516): repeated connect() failed (errno = 111) / sock connection error sequences against addr=10.0.0.2, port=4420 for tqpair values 0xd70600, 0x7fba60000b90, 0x7fba58000b90, and 0x7fba68000b90, each ending "qpair failed and we were unable to recover it.", omitted ...]
00:34:12.553 [2024-07-26 01:16:42.871657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.871684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-26 01:16:42.871847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.871874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-26 01:16:42.872009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.872036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-26 01:16:42.872206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.872233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-26 01:16:42.872362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.872390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 
00:34:12.553 [2024-07-26 01:16:42.872530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.872557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-26 01:16:42.872686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.872712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-26 01:16:42.872868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.872897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-26 01:16:42.873076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.873111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-26 01:16:42.873237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.873263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 
00:34:12.553 [2024-07-26 01:16:42.873427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.873454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-26 01:16:42.873598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.873625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-26 01:16:42.873803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.873830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-26 01:16:42.873931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.873957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-26 01:16:42.874093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.874121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 
00:34:12.553 [2024-07-26 01:16:42.874257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.874285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-26 01:16:42.874430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.874457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-26 01:16:42.874560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.874588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-26 01:16:42.874736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.874763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-26 01:16:42.874932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.874961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 
00:34:12.553 [2024-07-26 01:16:42.875106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.875133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-26 01:16:42.875274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.875301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-26 01:16:42.875433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.875460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-26 01:16:42.875589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.875615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-26 01:16:42.875719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.875745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 
00:34:12.553 [2024-07-26 01:16:42.875858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.875884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-26 01:16:42.876019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.876045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-26 01:16:42.876199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.876226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-26 01:16:42.876354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.876381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-26 01:16:42.876484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.876511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 
00:34:12.553 [2024-07-26 01:16:42.876611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.876637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-26 01:16:42.876767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.876798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-26 01:16:42.876961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.876988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-26 01:16:42.877115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.553 [2024-07-26 01:16:42.877143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.553 qpair failed and we were unable to recover it. 00:34:12.553 [2024-07-26 01:16:42.877264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-26 01:16:42.877293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 
00:34:12.554 [2024-07-26 01:16:42.877432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-26 01:16:42.877458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-26 01:16:42.877622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-26 01:16:42.877650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-26 01:16:42.877790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-26 01:16:42.877817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-26 01:16:42.877989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-26 01:16:42.878016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-26 01:16:42.878131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-26 01:16:42.878158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 
00:34:12.554 [2024-07-26 01:16:42.878294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-26 01:16:42.878321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-26 01:16:42.878464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-26 01:16:42.878490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-26 01:16:42.878653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-26 01:16:42.878679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-26 01:16:42.878829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-26 01:16:42.878858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-26 01:16:42.878995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-26 01:16:42.879021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 
00:34:12.554 [2024-07-26 01:16:42.879164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-26 01:16:42.879191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-26 01:16:42.879302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-26 01:16:42.879328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-26 01:16:42.879526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-26 01:16:42.879552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-26 01:16:42.879664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-26 01:16:42.879690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-26 01:16:42.879799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-26 01:16:42.879826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 
00:34:12.554 [2024-07-26 01:16:42.879962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-26 01:16:42.879989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-26 01:16:42.880131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-26 01:16:42.880158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-26 01:16:42.880271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-26 01:16:42.880298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-26 01:16:42.880432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-26 01:16:42.880459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-26 01:16:42.880601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-26 01:16:42.880628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 
00:34:12.554 [2024-07-26 01:16:42.880760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-26 01:16:42.880786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-26 01:16:42.880915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-26 01:16:42.880942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-26 01:16:42.881092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-26 01:16:42.881119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-26 01:16:42.881266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-26 01:16:42.881293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-26 01:16:42.881454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-26 01:16:42.881480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 
00:34:12.554 [2024-07-26 01:16:42.881590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-26 01:16:42.881617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-26 01:16:42.881781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-26 01:16:42.881807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-26 01:16:42.881949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-26 01:16:42.881976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-26 01:16:42.882141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-26 01:16:42.882169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 00:34:12.554 [2024-07-26 01:16:42.882316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.554 [2024-07-26 01:16:42.882345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.554 qpair failed and we were unable to recover it. 
00:34:12.554 [2024-07-26 01:16:42.882448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-26 01:16:42.882475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 00:34:12.555 [2024-07-26 01:16:42.882615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-26 01:16:42.882641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 00:34:12.555 [2024-07-26 01:16:42.882811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-26 01:16:42.882837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 00:34:12.555 [2024-07-26 01:16:42.882970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-26 01:16:42.882997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 00:34:12.555 [2024-07-26 01:16:42.883119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-26 01:16:42.883146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 
00:34:12.555 [2024-07-26 01:16:42.883286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-26 01:16:42.883312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 00:34:12.555 [2024-07-26 01:16:42.883480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-26 01:16:42.883510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 00:34:12.555 [2024-07-26 01:16:42.883670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-26 01:16:42.883697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 00:34:12.555 [2024-07-26 01:16:42.883836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-26 01:16:42.883865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 00:34:12.555 [2024-07-26 01:16:42.883999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-26 01:16:42.884026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it. 
00:34:12.555 [2024-07-26 01:16:42.884147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.555 [2024-07-26 01:16:42.884174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.555 qpair failed and we were unable to recover it.
[... the three messages above repeat ~115 times between 01:16:42.884 and 01:16:42.904, differing only in timestamp; tqpair is 0x7fba58000b90 except for a short run of 0x7fba68000b90, always with addr=10.0.0.2, port=4420 ...]
00:34:12.558 [2024-07-26 01:16:42.904095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.558 [2024-07-26 01:16:42.904131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.558 qpair failed and we were unable to recover it. 00:34:12.558 [2024-07-26 01:16:42.904335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.558 [2024-07-26 01:16:42.904365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.558 qpair failed and we were unable to recover it. 00:34:12.558 [2024-07-26 01:16:42.904524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.558 [2024-07-26 01:16:42.904568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.558 qpair failed and we were unable to recover it. 00:34:12.558 [2024-07-26 01:16:42.904729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.558 [2024-07-26 01:16:42.904757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.558 qpair failed and we were unable to recover it. 00:34:12.558 [2024-07-26 01:16:42.904871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.558 [2024-07-26 01:16:42.904897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.558 qpair failed and we were unable to recover it. 
00:34:12.558 [2024-07-26 01:16:42.905037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.558 [2024-07-26 01:16:42.905070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.558 qpair failed and we were unable to recover it. 00:34:12.558 [2024-07-26 01:16:42.905207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.558 [2024-07-26 01:16:42.905234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.558 qpair failed and we were unable to recover it. 00:34:12.558 [2024-07-26 01:16:42.905405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.558 [2024-07-26 01:16:42.905432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.558 qpair failed and we were unable to recover it. 00:34:12.558 [2024-07-26 01:16:42.905589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.558 [2024-07-26 01:16:42.905618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.558 qpair failed and we were unable to recover it. 00:34:12.558 [2024-07-26 01:16:42.905752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.558 [2024-07-26 01:16:42.905777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.558 qpair failed and we were unable to recover it. 
00:34:12.558 [2024-07-26 01:16:42.905935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.558 [2024-07-26 01:16:42.905962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.558 qpair failed and we were unable to recover it. 00:34:12.558 [2024-07-26 01:16:42.906101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.558 [2024-07-26 01:16:42.906135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.558 qpair failed and we were unable to recover it. 00:34:12.558 [2024-07-26 01:16:42.906279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.558 [2024-07-26 01:16:42.906306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.558 qpair failed and we were unable to recover it. 00:34:12.558 [2024-07-26 01:16:42.906444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.558 [2024-07-26 01:16:42.906478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.558 qpair failed and we were unable to recover it. 00:34:12.558 [2024-07-26 01:16:42.906618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.558 [2024-07-26 01:16:42.906645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.558 qpair failed and we were unable to recover it. 
00:34:12.558 [2024-07-26 01:16:42.906762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.558 [2024-07-26 01:16:42.906787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.558 qpair failed and we were unable to recover it. 00:34:12.558 [2024-07-26 01:16:42.906923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.906950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-26 01:16:42.907086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.907121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-26 01:16:42.907233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.907259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-26 01:16:42.907412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.907439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 
00:34:12.559 [2024-07-26 01:16:42.907602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.907629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-26 01:16:42.907801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.907841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-26 01:16:42.908026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.908073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-26 01:16:42.908188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.908217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-26 01:16:42.908360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.908387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 
00:34:12.559 [2024-07-26 01:16:42.908499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.908530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-26 01:16:42.908707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.908734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-26 01:16:42.908894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.908921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-26 01:16:42.909078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.909121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-26 01:16:42.909259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.909287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 
00:34:12.559 [2024-07-26 01:16:42.909438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.909465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-26 01:16:42.909600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.909626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-26 01:16:42.909757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.909787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-26 01:16:42.910014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.910043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-26 01:16:42.910246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.910273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 
00:34:12.559 [2024-07-26 01:16:42.910428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.910457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-26 01:16:42.910659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.910721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-26 01:16:42.910880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.910907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-26 01:16:42.911070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.911098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-26 01:16:42.911209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.911236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 
00:34:12.559 [2024-07-26 01:16:42.911393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.911423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-26 01:16:42.911578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.911612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-26 01:16:42.911867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.911920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-26 01:16:42.912106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.912133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-26 01:16:42.912241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.912268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 
00:34:12.559 [2024-07-26 01:16:42.912382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.912409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-26 01:16:42.912568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.912632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-26 01:16:42.912795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.912823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-26 01:16:42.912930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.912955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-26 01:16:42.913134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.913175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 
00:34:12.559 [2024-07-26 01:16:42.913292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.913332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-26 01:16:42.913494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.559 [2024-07-26 01:16:42.913539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.559 qpair failed and we were unable to recover it. 00:34:12.559 [2024-07-26 01:16:42.913713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.913740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-26 01:16:42.913872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.913898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-26 01:16:42.914013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.914040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 
00:34:12.560 [2024-07-26 01:16:42.914169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.914196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-26 01:16:42.914353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.914381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-26 01:16:42.914509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.914538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-26 01:16:42.914766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.914794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-26 01:16:42.914907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.914932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 
00:34:12.560 [2024-07-26 01:16:42.915032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.915066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-26 01:16:42.915185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.915213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-26 01:16:42.915361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.915387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-26 01:16:42.915547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.915573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-26 01:16:42.915694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.915722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 
00:34:12.560 [2024-07-26 01:16:42.915842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.915869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-26 01:16:42.916009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.916036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-26 01:16:42.916206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.916246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-26 01:16:42.916436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.916481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-26 01:16:42.916633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.916677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 
00:34:12.560 [2024-07-26 01:16:42.916882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.916929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-26 01:16:42.917068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.917095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-26 01:16:42.917227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.917253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-26 01:16:42.917387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.917413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-26 01:16:42.917560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.917589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 
00:34:12.560 [2024-07-26 01:16:42.917710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.917753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-26 01:16:42.917923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.917953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-26 01:16:42.918101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.918127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-26 01:16:42.918230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.918254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-26 01:16:42.918404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.918434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 
00:34:12.560 [2024-07-26 01:16:42.918720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.918775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-26 01:16:42.918925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.918954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-26 01:16:42.919137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.919166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-26 01:16:42.919303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.919344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-26 01:16:42.919546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.919591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 
00:34:12.560 [2024-07-26 01:16:42.919846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.919897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-26 01:16:42.920000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.920025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-26 01:16:42.920139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.920165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-26 01:16:42.920289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.920319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.560 qpair failed and we were unable to recover it. 00:34:12.560 [2024-07-26 01:16:42.920496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.560 [2024-07-26 01:16:42.920524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 
00:34:12.561 [2024-07-26 01:16:42.920671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.920701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 00:34:12.561 [2024-07-26 01:16:42.920854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.920881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 00:34:12.561 [2024-07-26 01:16:42.921018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.921045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 00:34:12.561 [2024-07-26 01:16:42.921184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.921229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 00:34:12.561 [2024-07-26 01:16:42.921395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.921426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 
00:34:12.561 [2024-07-26 01:16:42.921680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.921740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 00:34:12.561 [2024-07-26 01:16:42.921967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.922017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 00:34:12.561 [2024-07-26 01:16:42.922193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.922220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 00:34:12.561 [2024-07-26 01:16:42.922346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.922376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 00:34:12.561 [2024-07-26 01:16:42.922495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.922530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 
00:34:12.561 [2024-07-26 01:16:42.922750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.922808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 00:34:12.561 [2024-07-26 01:16:42.922989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.923018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 00:34:12.561 [2024-07-26 01:16:42.923173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.923199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 00:34:12.561 [2024-07-26 01:16:42.923329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.923356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 00:34:12.561 [2024-07-26 01:16:42.923496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.923538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 
00:34:12.561 [2024-07-26 01:16:42.923691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.923720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 00:34:12.561 [2024-07-26 01:16:42.923865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.923895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 00:34:12.561 [2024-07-26 01:16:42.924023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.924075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 00:34:12.561 [2024-07-26 01:16:42.924250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.924278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 00:34:12.561 [2024-07-26 01:16:42.924524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.924574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 
00:34:12.561 [2024-07-26 01:16:42.924776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.924827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 00:34:12.561 [2024-07-26 01:16:42.924948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.924975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 00:34:12.561 [2024-07-26 01:16:42.925148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.925175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 00:34:12.561 [2024-07-26 01:16:42.925316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.925363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 00:34:12.561 [2024-07-26 01:16:42.925509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.925538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 
00:34:12.561 [2024-07-26 01:16:42.925754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.925819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 00:34:12.561 [2024-07-26 01:16:42.925991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.926020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 00:34:12.561 [2024-07-26 01:16:42.926195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.926221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 00:34:12.561 [2024-07-26 01:16:42.926326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.926355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 00:34:12.561 [2024-07-26 01:16:42.926507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.926537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 
00:34:12.561 [2024-07-26 01:16:42.926707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.926736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 00:34:12.561 [2024-07-26 01:16:42.926891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.926920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 00:34:12.561 [2024-07-26 01:16:42.927036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.927072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 00:34:12.561 [2024-07-26 01:16:42.927234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.927259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 00:34:12.561 [2024-07-26 01:16:42.927408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.927435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 
00:34:12.561 [2024-07-26 01:16:42.927547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.561 [2024-07-26 01:16:42.927592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.561 qpair failed and we were unable to recover it. 00:34:12.561 [2024-07-26 01:16:42.927743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.927772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 00:34:12.562 [2024-07-26 01:16:42.927922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.927952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 00:34:12.562 [2024-07-26 01:16:42.928114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.928142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 00:34:12.562 [2024-07-26 01:16:42.928272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.928299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 
00:34:12.562 [2024-07-26 01:16:42.928504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.928532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 00:34:12.562 [2024-07-26 01:16:42.928682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.928712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 00:34:12.562 [2024-07-26 01:16:42.928851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.928881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 00:34:12.562 [2024-07-26 01:16:42.929057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.929115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 00:34:12.562 [2024-07-26 01:16:42.929230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.929258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 
00:34:12.562 [2024-07-26 01:16:42.929376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.929409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 00:34:12.562 [2024-07-26 01:16:42.929592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.929637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 00:34:12.562 [2024-07-26 01:16:42.929832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.929893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 00:34:12.562 [2024-07-26 01:16:42.930063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.930091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 00:34:12.562 [2024-07-26 01:16:42.930230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.930257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 
00:34:12.562 [2024-07-26 01:16:42.930441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.930486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 00:34:12.562 [2024-07-26 01:16:42.930640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.930684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 00:34:12.562 [2024-07-26 01:16:42.930845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.930889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 00:34:12.562 [2024-07-26 01:16:42.931037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.931083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 00:34:12.562 [2024-07-26 01:16:42.931243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.931273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 
00:34:12.562 [2024-07-26 01:16:42.931396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.931427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 00:34:12.562 [2024-07-26 01:16:42.931553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.931581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 00:34:12.562 [2024-07-26 01:16:42.931731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.931760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 00:34:12.562 [2024-07-26 01:16:42.931889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.931919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 00:34:12.562 [2024-07-26 01:16:42.932062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.932091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 
00:34:12.562 [2024-07-26 01:16:42.932252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.932279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 00:34:12.562 [2024-07-26 01:16:42.932443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.932487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 00:34:12.562 [2024-07-26 01:16:42.932651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.932698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 00:34:12.562 [2024-07-26 01:16:42.932839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.932867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 00:34:12.562 [2024-07-26 01:16:42.933008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.933035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 
00:34:12.562 [2024-07-26 01:16:42.933237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.933268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 00:34:12.562 [2024-07-26 01:16:42.933427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.933457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 00:34:12.562 [2024-07-26 01:16:42.933604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.933633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 00:34:12.562 [2024-07-26 01:16:42.933760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.933800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 00:34:12.562 [2024-07-26 01:16:42.933919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.933944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 
00:34:12.562 [2024-07-26 01:16:42.934072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.934110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 00:34:12.562 [2024-07-26 01:16:42.934261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.934290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 00:34:12.562 [2024-07-26 01:16:42.934430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.562 [2024-07-26 01:16:42.934464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.562 qpair failed and we were unable to recover it. 00:34:12.563 [2024-07-26 01:16:42.934616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.563 [2024-07-26 01:16:42.934645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.563 qpair failed and we were unable to recover it. 00:34:12.563 [2024-07-26 01:16:42.934855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.563 [2024-07-26 01:16:42.934915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.563 qpair failed and we were unable to recover it. 
00:34:12.563 [2024-07-26 01:16:42.935071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.563 [2024-07-26 01:16:42.935116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.563 qpair failed and we were unable to recover it. 00:34:12.563 [2024-07-26 01:16:42.935227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.563 [2024-07-26 01:16:42.935253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.563 qpair failed and we were unable to recover it. 00:34:12.563 [2024-07-26 01:16:42.935417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.563 [2024-07-26 01:16:42.935464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.563 qpair failed and we were unable to recover it. 00:34:12.563 [2024-07-26 01:16:42.935620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.563 [2024-07-26 01:16:42.935650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.563 qpair failed and we were unable to recover it. 00:34:12.563 [2024-07-26 01:16:42.935915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.563 [2024-07-26 01:16:42.935967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.563 qpair failed and we were unable to recover it. 
00:34:12.563 [2024-07-26 01:16:42.936117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.563 [2024-07-26 01:16:42.936144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.563 qpair failed and we were unable to recover it. 00:34:12.563 [2024-07-26 01:16:42.936277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.563 [2024-07-26 01:16:42.936321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.563 qpair failed and we were unable to recover it. 00:34:12.563 [2024-07-26 01:16:42.936455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.563 [2024-07-26 01:16:42.936502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.563 qpair failed and we were unable to recover it. 00:34:12.563 [2024-07-26 01:16:42.936650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.563 [2024-07-26 01:16:42.936694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.563 qpair failed and we were unable to recover it. 00:34:12.563 [2024-07-26 01:16:42.936843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.563 [2024-07-26 01:16:42.936870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.563 qpair failed and we were unable to recover it. 
00:34:12.563 [2024-07-26 01:16:42.937007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.563 [2024-07-26 01:16:42.937034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.563 qpair failed and we were unable to recover it. 00:34:12.563 [2024-07-26 01:16:42.937233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.563 [2024-07-26 01:16:42.937263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.563 qpair failed and we were unable to recover it. 00:34:12.563 [2024-07-26 01:16:42.937449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.563 [2024-07-26 01:16:42.937479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.563 qpair failed and we were unable to recover it. 00:34:12.563 [2024-07-26 01:16:42.937607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.563 [2024-07-26 01:16:42.937635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.563 qpair failed and we were unable to recover it. 00:34:12.563 [2024-07-26 01:16:42.937762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.563 [2024-07-26 01:16:42.937792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.563 qpair failed and we were unable to recover it. 
00:34:12.563 [2024-07-26 01:16:42.937917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.563 [2024-07-26 01:16:42.937947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.563 qpair failed and we were unable to recover it.
00:34:12.563 [2024-07-26 01:16:42.938086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.563 [2024-07-26 01:16:42.938123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.563 qpair failed and we were unable to recover it.
00:34:12.563 [2024-07-26 01:16:42.938295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.563 [2024-07-26 01:16:42.938330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.563 qpair failed and we were unable to recover it.
00:34:12.563 [2024-07-26 01:16:42.938464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.563 [2024-07-26 01:16:42.938493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.563 qpair failed and we were unable to recover it.
00:34:12.563 [2024-07-26 01:16:42.938655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.563 [2024-07-26 01:16:42.938692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.563 qpair failed and we were unable to recover it.
00:34:12.563 [2024-07-26 01:16:42.938867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.563 [2024-07-26 01:16:42.938907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.563 qpair failed and we were unable to recover it.
00:34:12.563 [2024-07-26 01:16:42.939064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.563 [2024-07-26 01:16:42.939107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.563 qpair failed and we were unable to recover it.
00:34:12.563 [2024-07-26 01:16:42.939223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.563 [2024-07-26 01:16:42.939248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.563 qpair failed and we were unable to recover it.
00:34:12.563 [2024-07-26 01:16:42.939383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.563 [2024-07-26 01:16:42.939431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:12.563 qpair failed and we were unable to recover it.
00:34:12.563 [2024-07-26 01:16:42.939594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.563 [2024-07-26 01:16:42.939651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:12.563 qpair failed and we were unable to recover it.
00:34:12.563 [2024-07-26 01:16:42.939837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.563 [2024-07-26 01:16:42.939895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:12.563 qpair failed and we were unable to recover it.
00:34:12.563 [2024-07-26 01:16:42.940072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.563 [2024-07-26 01:16:42.940111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:12.563 qpair failed and we were unable to recover it.
00:34:12.563 [2024-07-26 01:16:42.940255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.563 [2024-07-26 01:16:42.940283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:12.563 qpair failed and we were unable to recover it.
00:34:12.563 [2024-07-26 01:16:42.940454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.563 [2024-07-26 01:16:42.940484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:12.563 qpair failed and we were unable to recover it.
00:34:12.563 [2024-07-26 01:16:42.940686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.563 [2024-07-26 01:16:42.940736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:12.563 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.940851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.940878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.941021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.941048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.941195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.941224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.941405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.941434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.941613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.941660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.941790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.941837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.941966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.941994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.942180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.942207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.942396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.942426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.942532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.942561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.942709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.942738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.942852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.942882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.943029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.943064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.943221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.943247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.943402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.943431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.943602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.943631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.943775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.943804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.943954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.943980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.944100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.944127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.944229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.944256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.944390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.944415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.944579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.944621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.944772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.944811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.944988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.945017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.945179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.945205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.945372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.945401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.945576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.945640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.945788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.945816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.945960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.945989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.946125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.946156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.946294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.946345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.946498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.946527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.946690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.946719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.946862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.946891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.947067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.947127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.947262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.947288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.947450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.564 [2024-07-26 01:16:42.947479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.564 qpair failed and we were unable to recover it.
00:34:12.564 [2024-07-26 01:16:42.947642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.947685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.947880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.947909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.948055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.948107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.948244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.948271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.948404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.948429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.948565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.948594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.948719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.948748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.948893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.948921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.949037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.949078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.949212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.949238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.949356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.949380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.949510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.949535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.949667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.949697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.949870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.949899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.950018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.950046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.950234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.950259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.950368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.950393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.950573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.950602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.950745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.950773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.950917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.950944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.951129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.951156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.951280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.951305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.951467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.951495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.951641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.951669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.951784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.951812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.951981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.952022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.952200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.952228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.952367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.952412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.952597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.952641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.952827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.952870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.953033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.953068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.953242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.953269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.953453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.953480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.953638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.953680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.953815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.953857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.953989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.954017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.954160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.565 [2024-07-26 01:16:42.954186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:12.565 qpair failed and we were unable to recover it.
00:34:12.565 [2024-07-26 01:16:42.954323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.566 [2024-07-26 01:16:42.954349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:12.566 qpair failed and we were unable to recover it.
00:34:12.566 [2024-07-26 01:16:42.954538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.566 [2024-07-26 01:16:42.954588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:12.566 qpair failed and we were unable to recover it.
00:34:12.566 [2024-07-26 01:16:42.954712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.566 [2024-07-26 01:16:42.954756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:12.566 qpair failed and we were unable to recover it.
00:34:12.566 [2024-07-26 01:16:42.954887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.566 [2024-07-26 01:16:42.954913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:12.566 qpair failed and we were unable to recover it.
00:34:12.566 [2024-07-26 01:16:42.955147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.566 [2024-07-26 01:16:42.955190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:12.566 qpair failed and we were unable to recover it.
00:34:12.566 [2024-07-26 01:16:42.955323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.566 [2024-07-26 01:16:42.955350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:12.566 qpair failed and we were unable to recover it.
00:34:12.566 [2024-07-26 01:16:42.955508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-26 01:16:42.955534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-26 01:16:42.955644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-26 01:16:42.955670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-26 01:16:42.955807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-26 01:16:42.955834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-26 01:16:42.955980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-26 01:16:42.956006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-26 01:16:42.956172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-26 01:16:42.956217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 
00:34:12.566 [2024-07-26 01:16:42.956343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-26 01:16:42.956388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-26 01:16:42.956570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-26 01:16:42.956614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-26 01:16:42.956777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-26 01:16:42.956804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-26 01:16:42.956965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-26 01:16:42.956991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-26 01:16:42.957179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-26 01:16:42.957225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 
00:34:12.566 [2024-07-26 01:16:42.957371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-26 01:16:42.957415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-26 01:16:42.957574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-26 01:16:42.957617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-26 01:16:42.957758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-26 01:16:42.957784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-26 01:16:42.957922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-26 01:16:42.957948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-26 01:16:42.958069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-26 01:16:42.958106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 
00:34:12.566 [2024-07-26 01:16:42.958255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-26 01:16:42.958299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-26 01:16:42.958487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-26 01:16:42.958515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-26 01:16:42.958706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-26 01:16:42.958734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-26 01:16:42.958870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-26 01:16:42.958896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-26 01:16:42.959037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-26 01:16:42.959069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 
00:34:12.566 [2024-07-26 01:16:42.959193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-26 01:16:42.959219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-26 01:16:42.959378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-26 01:16:42.959404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-26 01:16:42.959566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-26 01:16:42.959592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-26 01:16:42.959762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-26 01:16:42.959789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-26 01:16:42.959952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-26 01:16:42.959979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 
00:34:12.566 [2024-07-26 01:16:42.960126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-26 01:16:42.960155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-26 01:16:42.960304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-26 01:16:42.960348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-26 01:16:42.960498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-26 01:16:42.960527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-26 01:16:42.960740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-26 01:16:42.960767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-26 01:16:42.960904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-26 01:16:42.960931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 
00:34:12.566 [2024-07-26 01:16:42.961095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.566 [2024-07-26 01:16:42.961121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.566 qpair failed and we were unable to recover it. 00:34:12.566 [2024-07-26 01:16:42.961254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.567 [2024-07-26 01:16:42.961298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.567 qpair failed and we were unable to recover it. 00:34:12.567 [2024-07-26 01:16:42.961458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.567 [2024-07-26 01:16:42.961503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.567 qpair failed and we were unable to recover it. 00:34:12.567 [2024-07-26 01:16:42.961641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.567 [2024-07-26 01:16:42.961668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.567 qpair failed and we were unable to recover it. 00:34:12.567 [2024-07-26 01:16:42.961803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.567 [2024-07-26 01:16:42.961829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.567 qpair failed and we were unable to recover it. 
00:34:12.567 [2024-07-26 01:16:42.961967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.567 [2024-07-26 01:16:42.961993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.567 qpair failed and we were unable to recover it. 00:34:12.567 [2024-07-26 01:16:42.962146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.567 [2024-07-26 01:16:42.962191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.567 qpair failed and we were unable to recover it. 00:34:12.567 [2024-07-26 01:16:42.962339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.567 [2024-07-26 01:16:42.962382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.567 qpair failed and we were unable to recover it. 00:34:12.567 [2024-07-26 01:16:42.962566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.567 [2024-07-26 01:16:42.962610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.567 qpair failed and we were unable to recover it. 00:34:12.567 [2024-07-26 01:16:42.962749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.567 [2024-07-26 01:16:42.962776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.567 qpair failed and we were unable to recover it. 
00:34:12.567 [2024-07-26 01:16:42.962891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.567 [2024-07-26 01:16:42.962917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.567 qpair failed and we were unable to recover it. 00:34:12.567 [2024-07-26 01:16:42.963064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.567 [2024-07-26 01:16:42.963090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.567 qpair failed and we were unable to recover it. 00:34:12.567 [2024-07-26 01:16:42.963216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.567 [2024-07-26 01:16:42.963246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.567 qpair failed and we were unable to recover it. 00:34:12.567 [2024-07-26 01:16:42.963423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.567 [2024-07-26 01:16:42.963467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.567 qpair failed and we were unable to recover it. 00:34:12.567 [2024-07-26 01:16:42.963604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.567 [2024-07-26 01:16:42.963631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.567 qpair failed and we were unable to recover it. 
00:34:12.567 [2024-07-26 01:16:42.963765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.567 [2024-07-26 01:16:42.963792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.567 qpair failed and we were unable to recover it. 00:34:12.567 [2024-07-26 01:16:42.963929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.567 [2024-07-26 01:16:42.963954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.567 qpair failed and we were unable to recover it. 00:34:12.567 [2024-07-26 01:16:42.964072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.567 [2024-07-26 01:16:42.964100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.567 qpair failed and we were unable to recover it. 00:34:12.567 [2024-07-26 01:16:42.964257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.567 [2024-07-26 01:16:42.964301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.567 qpair failed and we were unable to recover it. 00:34:12.567 [2024-07-26 01:16:42.964436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.567 [2024-07-26 01:16:42.964479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.567 qpair failed and we were unable to recover it. 
00:34:12.567 [2024-07-26 01:16:42.964588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-26 01:16:42.964616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-26 01:16:42.964742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-26 01:16:42.964781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-26 01:16:42.964923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-26 01:16:42.964950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-26 01:16:42.965098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-26 01:16:42.965128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.567 [2024-07-26 01:16:42.965278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.567 [2024-07-26 01:16:42.965307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:12.567 qpair failed and we were unable to recover it.
00:34:12.848 [2024-07-26 01:16:42.974564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.848 [2024-07-26 01:16:42.974623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.848 qpair failed and we were unable to recover it. 00:34:12.848 [2024-07-26 01:16:42.974777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.848 [2024-07-26 01:16:42.974824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.848 qpair failed and we were unable to recover it. 00:34:12.849 [2024-07-26 01:16:42.974964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.974992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 00:34:12.849 [2024-07-26 01:16:42.975151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.975182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 00:34:12.849 [2024-07-26 01:16:42.975308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.975337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 
00:34:12.849 [2024-07-26 01:16:42.975471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.975499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 00:34:12.849 [2024-07-26 01:16:42.975686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.975720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 00:34:12.849 [2024-07-26 01:16:42.975909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.975938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 00:34:12.849 [2024-07-26 01:16:42.976071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.976116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 00:34:12.849 [2024-07-26 01:16:42.976276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.976302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 
00:34:12.849 [2024-07-26 01:16:42.976409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.976435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 00:34:12.849 [2024-07-26 01:16:42.976542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.976568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 00:34:12.849 [2024-07-26 01:16:42.976760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.976788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 00:34:12.849 [2024-07-26 01:16:42.976895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.976924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 00:34:12.849 [2024-07-26 01:16:42.977044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.977083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 
00:34:12.849 [2024-07-26 01:16:42.977237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.977264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 00:34:12.849 [2024-07-26 01:16:42.977406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.977434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 00:34:12.849 [2024-07-26 01:16:42.977575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.977605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 00:34:12.849 [2024-07-26 01:16:42.977716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.977744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 00:34:12.849 [2024-07-26 01:16:42.977870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.977899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 
00:34:12.849 [2024-07-26 01:16:42.978073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.978118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 00:34:12.849 [2024-07-26 01:16:42.978286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.978315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 00:34:12.849 [2024-07-26 01:16:42.978479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.978526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 00:34:12.849 [2024-07-26 01:16:42.978681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.978725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 00:34:12.849 [2024-07-26 01:16:42.978881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.978908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 
00:34:12.849 [2024-07-26 01:16:42.979069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.979098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 00:34:12.849 [2024-07-26 01:16:42.979270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.979314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 00:34:12.849 [2024-07-26 01:16:42.979526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.979587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 00:34:12.849 [2024-07-26 01:16:42.979724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.979769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 00:34:12.849 [2024-07-26 01:16:42.979903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.979929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 
00:34:12.849 [2024-07-26 01:16:42.980073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.980101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 00:34:12.849 [2024-07-26 01:16:42.980230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.980260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 00:34:12.849 [2024-07-26 01:16:42.980407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.980436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 00:34:12.849 [2024-07-26 01:16:42.980586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.980615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 00:34:12.849 [2024-07-26 01:16:42.980756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.980785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 
00:34:12.849 [2024-07-26 01:16:42.980929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.980958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 00:34:12.849 [2024-07-26 01:16:42.981114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.981141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 00:34:12.849 [2024-07-26 01:16:42.981273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.981299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 00:34:12.849 [2024-07-26 01:16:42.981419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.849 [2024-07-26 01:16:42.981448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.849 qpair failed and we were unable to recover it. 00:34:12.849 [2024-07-26 01:16:42.981589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.981637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 
00:34:12.850 [2024-07-26 01:16:42.981782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.981811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 00:34:12.850 [2024-07-26 01:16:42.981958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.981987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 00:34:12.850 [2024-07-26 01:16:42.982147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.982174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 00:34:12.850 [2024-07-26 01:16:42.982285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.982311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 00:34:12.850 [2024-07-26 01:16:42.982450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.982476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 
00:34:12.850 [2024-07-26 01:16:42.982681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.982729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 00:34:12.850 [2024-07-26 01:16:42.982911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.982940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 00:34:12.850 [2024-07-26 01:16:42.983090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.983132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 00:34:12.850 [2024-07-26 01:16:42.983243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.983269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 00:34:12.850 [2024-07-26 01:16:42.983400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.983441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 
00:34:12.850 [2024-07-26 01:16:42.983613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.983641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 00:34:12.850 [2024-07-26 01:16:42.983811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.983840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 00:34:12.850 [2024-07-26 01:16:42.983959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.983991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 00:34:12.850 [2024-07-26 01:16:42.984123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.984151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 00:34:12.850 [2024-07-26 01:16:42.984289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.984314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 
00:34:12.850 [2024-07-26 01:16:42.984471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.984500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 00:34:12.850 [2024-07-26 01:16:42.984680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.984708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 00:34:12.850 [2024-07-26 01:16:42.984890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.984918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 00:34:12.850 [2024-07-26 01:16:42.985104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.985131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 00:34:12.850 [2024-07-26 01:16:42.985292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.985318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 
00:34:12.850 [2024-07-26 01:16:42.985529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.985591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 00:34:12.850 [2024-07-26 01:16:42.985769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.985798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 00:34:12.850 [2024-07-26 01:16:42.985944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.985973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 00:34:12.850 [2024-07-26 01:16:42.986129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.986156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 00:34:12.850 [2024-07-26 01:16:42.986289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.986315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 
00:34:12.850 [2024-07-26 01:16:42.986501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.986529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 00:34:12.850 [2024-07-26 01:16:42.986794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.986846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 00:34:12.850 [2024-07-26 01:16:42.986995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.987024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 00:34:12.850 [2024-07-26 01:16:42.987176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.987202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 00:34:12.850 [2024-07-26 01:16:42.987361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.987387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 
00:34:12.850 [2024-07-26 01:16:42.987549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.987592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 00:34:12.850 [2024-07-26 01:16:42.987708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.987737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 00:34:12.850 [2024-07-26 01:16:42.987917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.987946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 00:34:12.850 [2024-07-26 01:16:42.988112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.988139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 00:34:12.850 [2024-07-26 01:16:42.988266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.988292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 
00:34:12.850 [2024-07-26 01:16:42.988464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.988491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 00:34:12.850 [2024-07-26 01:16:42.988673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.850 [2024-07-26 01:16:42.988702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.850 qpair failed and we were unable to recover it. 00:34:12.851 [2024-07-26 01:16:42.988812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.851 [2024-07-26 01:16:42.988841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.851 qpair failed and we were unable to recover it. 00:34:12.851 [2024-07-26 01:16:42.989003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.851 [2024-07-26 01:16:42.989030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.851 qpair failed and we were unable to recover it. 00:34:12.851 [2024-07-26 01:16:42.989145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.851 [2024-07-26 01:16:42.989175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.851 qpair failed and we were unable to recover it. 
00:34:12.851 [... entries from 01:16:42.989336 through 01:16:43.009604 repeat the same sequence: connect() failed, errno = 111 (ECONNREFUSED); sock connection error of tqpair=0xd70600 and tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it ...]
00:34:12.854 [2024-07-26 01:16:43.009756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.854 [2024-07-26 01:16:43.009785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.854 qpair failed and we were unable to recover it. 00:34:12.854 [2024-07-26 01:16:43.009957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.854 [2024-07-26 01:16:43.009986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.854 qpair failed and we were unable to recover it. 00:34:12.854 [2024-07-26 01:16:43.010142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.854 [2024-07-26 01:16:43.010170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.854 qpair failed and we were unable to recover it. 00:34:12.854 [2024-07-26 01:16:43.010272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.854 [2024-07-26 01:16:43.010298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.854 qpair failed and we were unable to recover it. 00:34:12.854 [2024-07-26 01:16:43.010455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.854 [2024-07-26 01:16:43.010481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.854 qpair failed and we were unable to recover it. 
00:34:12.854 [2024-07-26 01:16:43.010709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.854 [2024-07-26 01:16:43.010759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.854 qpair failed and we were unable to recover it. 00:34:12.854 [2024-07-26 01:16:43.010925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.854 [2024-07-26 01:16:43.010951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.854 qpair failed and we were unable to recover it. 00:34:12.854 [2024-07-26 01:16:43.011088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.854 [2024-07-26 01:16:43.011125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.854 qpair failed and we were unable to recover it. 00:34:12.854 [2024-07-26 01:16:43.011261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.854 [2024-07-26 01:16:43.011287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.854 qpair failed and we were unable to recover it. 00:34:12.854 [2024-07-26 01:16:43.011459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.854 [2024-07-26 01:16:43.011488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.854 qpair failed and we were unable to recover it. 
00:34:12.854 [2024-07-26 01:16:43.011635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.854 [2024-07-26 01:16:43.011666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.854 qpair failed and we were unable to recover it. 00:34:12.854 [2024-07-26 01:16:43.011787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.854 [2024-07-26 01:16:43.011817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.854 qpair failed and we were unable to recover it. 00:34:12.854 [2024-07-26 01:16:43.011991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.854 [2024-07-26 01:16:43.012020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.854 qpair failed and we were unable to recover it. 00:34:12.854 [2024-07-26 01:16:43.012186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.854 [2024-07-26 01:16:43.012213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.854 qpair failed and we were unable to recover it. 00:34:12.854 [2024-07-26 01:16:43.012349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.854 [2024-07-26 01:16:43.012375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.854 qpair failed and we were unable to recover it. 
00:34:12.854 [2024-07-26 01:16:43.012505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.854 [2024-07-26 01:16:43.012532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.854 qpair failed and we were unable to recover it. 00:34:12.854 [2024-07-26 01:16:43.012665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.854 [2024-07-26 01:16:43.012693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.854 qpair failed and we were unable to recover it. 00:34:12.854 [2024-07-26 01:16:43.012845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.854 [2024-07-26 01:16:43.012875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.854 qpair failed and we were unable to recover it. 00:34:12.854 [2024-07-26 01:16:43.013012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.854 [2024-07-26 01:16:43.013041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.854 qpair failed and we were unable to recover it. 00:34:12.854 [2024-07-26 01:16:43.013206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.854 [2024-07-26 01:16:43.013231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.854 qpair failed and we were unable to recover it. 
00:34:12.854 [2024-07-26 01:16:43.013380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.854 [2024-07-26 01:16:43.013421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.854 qpair failed and we were unable to recover it. 00:34:12.854 [2024-07-26 01:16:43.013553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.854 [2024-07-26 01:16:43.013599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.854 qpair failed and we were unable to recover it. 00:34:12.854 [2024-07-26 01:16:43.013785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.854 [2024-07-26 01:16:43.013816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.854 qpair failed and we were unable to recover it. 00:34:12.854 [2024-07-26 01:16:43.013949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.854 [2024-07-26 01:16:43.013977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.854 qpair failed and we were unable to recover it. 00:34:12.854 [2024-07-26 01:16:43.014158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.854 [2024-07-26 01:16:43.014209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.854 qpair failed and we were unable to recover it. 
00:34:12.854 [2024-07-26 01:16:43.014358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.854 [2024-07-26 01:16:43.014385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.854 qpair failed and we were unable to recover it. 00:34:12.854 [2024-07-26 01:16:43.014513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.854 [2024-07-26 01:16:43.014549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.854 qpair failed and we were unable to recover it. 00:34:12.854 [2024-07-26 01:16:43.014683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.854 [2024-07-26 01:16:43.014727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.854 qpair failed and we were unable to recover it. 00:34:12.854 [2024-07-26 01:16:43.014829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.854 [2024-07-26 01:16:43.014856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 00:34:12.855 [2024-07-26 01:16:43.015017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.015045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 
00:34:12.855 [2024-07-26 01:16:43.015228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.015255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 00:34:12.855 [2024-07-26 01:16:43.015389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.015423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 00:34:12.855 [2024-07-26 01:16:43.015555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.015589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 00:34:12.855 [2024-07-26 01:16:43.015735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.015774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 00:34:12.855 [2024-07-26 01:16:43.015887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.015914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 
00:34:12.855 [2024-07-26 01:16:43.016070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.016108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 00:34:12.855 [2024-07-26 01:16:43.016215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.016242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 00:34:12.855 [2024-07-26 01:16:43.016424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.016453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 00:34:12.855 [2024-07-26 01:16:43.016635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.016664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 00:34:12.855 [2024-07-26 01:16:43.016841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.016870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 
00:34:12.855 [2024-07-26 01:16:43.017019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.017048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 00:34:12.855 [2024-07-26 01:16:43.017205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.017231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 00:34:12.855 [2024-07-26 01:16:43.017335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.017378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 00:34:12.855 [2024-07-26 01:16:43.017555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.017583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 00:34:12.855 [2024-07-26 01:16:43.017724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.017753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 
00:34:12.855 [2024-07-26 01:16:43.017860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.017889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 00:34:12.855 [2024-07-26 01:16:43.018041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.018078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 00:34:12.855 [2024-07-26 01:16:43.018257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.018283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 00:34:12.855 [2024-07-26 01:16:43.018447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.018492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 00:34:12.855 [2024-07-26 01:16:43.018677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.018710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 
00:34:12.855 [2024-07-26 01:16:43.018880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.018910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 00:34:12.855 [2024-07-26 01:16:43.019037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.019095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 00:34:12.855 [2024-07-26 01:16:43.019238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.019265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 00:34:12.855 [2024-07-26 01:16:43.019425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.019452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 00:34:12.855 [2024-07-26 01:16:43.019608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.019639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 
00:34:12.855 [2024-07-26 01:16:43.019834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.019877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 00:34:12.855 [2024-07-26 01:16:43.020054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.020090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 00:34:12.855 [2024-07-26 01:16:43.020220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.020248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 00:34:12.855 [2024-07-26 01:16:43.020411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.020438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 00:34:12.855 [2024-07-26 01:16:43.020616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.020646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 
00:34:12.855 [2024-07-26 01:16:43.020774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.020801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 00:34:12.855 [2024-07-26 01:16:43.020931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.020961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 00:34:12.855 [2024-07-26 01:16:43.021118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.021146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 00:34:12.855 [2024-07-26 01:16:43.021306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.021352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 00:34:12.855 [2024-07-26 01:16:43.021498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.855 [2024-07-26 01:16:43.021528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.855 qpair failed and we were unable to recover it. 
00:34:12.855 [2024-07-26 01:16:43.021735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.856 [2024-07-26 01:16:43.021765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.856 qpair failed and we were unable to recover it. 00:34:12.856 [2024-07-26 01:16:43.021910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.856 [2024-07-26 01:16:43.021941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.856 qpair failed and we were unable to recover it. 00:34:12.856 [2024-07-26 01:16:43.022111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.856 [2024-07-26 01:16:43.022138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.856 qpair failed and we were unable to recover it. 00:34:12.856 [2024-07-26 01:16:43.022247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.856 [2024-07-26 01:16:43.022274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.856 qpair failed and we were unable to recover it. 00:34:12.856 [2024-07-26 01:16:43.022431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.856 [2024-07-26 01:16:43.022462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.856 qpair failed and we were unable to recover it. 
00:34:12.856 [2024-07-26 01:16:43.022630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.856 [2024-07-26 01:16:43.022660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.856 qpair failed and we were unable to recover it. 00:34:12.856 [2024-07-26 01:16:43.022805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.856 [2024-07-26 01:16:43.022835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.856 qpair failed and we were unable to recover it. 00:34:12.856 [2024-07-26 01:16:43.023009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.856 [2024-07-26 01:16:43.023039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.856 qpair failed and we were unable to recover it. 00:34:12.856 [2024-07-26 01:16:43.023209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.856 [2024-07-26 01:16:43.023236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.856 qpair failed and we were unable to recover it. 00:34:12.856 [2024-07-26 01:16:43.023395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.856 [2024-07-26 01:16:43.023422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.856 qpair failed and we were unable to recover it. 
00:34:12.856 [2024-07-26 01:16:43.023572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.856 [2024-07-26 01:16:43.023602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.856 qpair failed and we were unable to recover it. 00:34:12.856 [2024-07-26 01:16:43.023803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.856 [2024-07-26 01:16:43.023833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.856 qpair failed and we were unable to recover it. 00:34:12.856 [2024-07-26 01:16:43.024018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.856 [2024-07-26 01:16:43.024045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.856 qpair failed and we were unable to recover it. 00:34:12.856 [2024-07-26 01:16:43.024165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.856 [2024-07-26 01:16:43.024192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.856 qpair failed and we were unable to recover it. 00:34:12.856 [2024-07-26 01:16:43.024299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.856 [2024-07-26 01:16:43.024325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.856 qpair failed and we were unable to recover it. 
00:34:12.859 [2024-07-26 01:16:43.045508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.859 [2024-07-26 01:16:43.045535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.859 qpair failed and we were unable to recover it. 00:34:12.859 [2024-07-26 01:16:43.045698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.859 [2024-07-26 01:16:43.045725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.859 qpair failed and we were unable to recover it. 00:34:12.859 [2024-07-26 01:16:43.045882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.859 [2024-07-26 01:16:43.045912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.859 qpair failed and we were unable to recover it. 00:34:12.859 [2024-07-26 01:16:43.046091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.859 [2024-07-26 01:16:43.046133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.859 qpair failed and we were unable to recover it. 00:34:12.859 [2024-07-26 01:16:43.046245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.859 [2024-07-26 01:16:43.046272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.859 qpair failed and we were unable to recover it. 
00:34:12.859 [2024-07-26 01:16:43.046458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.859 [2024-07-26 01:16:43.046488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.859 qpair failed and we were unable to recover it. 00:34:12.859 [2024-07-26 01:16:43.046633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.859 [2024-07-26 01:16:43.046662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.859 qpair failed and we were unable to recover it. 00:34:12.859 [2024-07-26 01:16:43.046813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.859 [2024-07-26 01:16:43.046840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.859 qpair failed and we were unable to recover it. 00:34:12.859 [2024-07-26 01:16:43.046978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.859 [2024-07-26 01:16:43.047021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.859 qpair failed and we were unable to recover it. 00:34:12.859 [2024-07-26 01:16:43.047203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.859 [2024-07-26 01:16:43.047232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.859 qpair failed and we were unable to recover it. 
00:34:12.859 [2024-07-26 01:16:43.047401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.859 [2024-07-26 01:16:43.047427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.859 qpair failed and we were unable to recover it. 00:34:12.859 [2024-07-26 01:16:43.047547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.859 [2024-07-26 01:16:43.047574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.859 qpair failed and we were unable to recover it. 00:34:12.859 [2024-07-26 01:16:43.047713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.859 [2024-07-26 01:16:43.047739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.859 qpair failed and we were unable to recover it. 00:34:12.859 [2024-07-26 01:16:43.047920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.859 [2024-07-26 01:16:43.047947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.859 qpair failed and we were unable to recover it. 00:34:12.859 [2024-07-26 01:16:43.048106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.859 [2024-07-26 01:16:43.048137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.859 qpair failed and we were unable to recover it. 
00:34:12.859 [2024-07-26 01:16:43.048278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.859 [2024-07-26 01:16:43.048308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.859 qpair failed and we were unable to recover it. 00:34:12.859 [2024-07-26 01:16:43.048466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.859 [2024-07-26 01:16:43.048493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.859 qpair failed and we were unable to recover it. 00:34:12.859 [2024-07-26 01:16:43.048635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.859 [2024-07-26 01:16:43.048663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.859 qpair failed and we were unable to recover it. 00:34:12.859 [2024-07-26 01:16:43.048801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.859 [2024-07-26 01:16:43.048831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.859 qpair failed and we were unable to recover it. 00:34:12.859 [2024-07-26 01:16:43.048979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.049005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 
00:34:12.860 [2024-07-26 01:16:43.049186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.049213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 00:34:12.860 [2024-07-26 01:16:43.049326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.049358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 00:34:12.860 [2024-07-26 01:16:43.049547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.049574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 00:34:12.860 [2024-07-26 01:16:43.049727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.049756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 00:34:12.860 [2024-07-26 01:16:43.049883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.049913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 
00:34:12.860 [2024-07-26 01:16:43.050049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.050081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 00:34:12.860 [2024-07-26 01:16:43.050232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.050258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 00:34:12.860 [2024-07-26 01:16:43.050422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.050452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 00:34:12.860 [2024-07-26 01:16:43.050602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.050628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 00:34:12.860 [2024-07-26 01:16:43.050802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.050831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 
00:34:12.860 [2024-07-26 01:16:43.051004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.051033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 00:34:12.860 [2024-07-26 01:16:43.051181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.051209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 00:34:12.860 [2024-07-26 01:16:43.051318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.051345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 00:34:12.860 [2024-07-26 01:16:43.051506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.051532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 00:34:12.860 [2024-07-26 01:16:43.051683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.051711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 
00:34:12.860 [2024-07-26 01:16:43.051851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.051877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 00:34:12.860 [2024-07-26 01:16:43.052012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.052039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 00:34:12.860 [2024-07-26 01:16:43.052162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.052190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 00:34:12.860 [2024-07-26 01:16:43.052325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.052351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 00:34:12.860 [2024-07-26 01:16:43.052506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.052536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 
00:34:12.860 [2024-07-26 01:16:43.052701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.052729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 00:34:12.860 [2024-07-26 01:16:43.052848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.052892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 00:34:12.860 [2024-07-26 01:16:43.053009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.053038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 00:34:12.860 [2024-07-26 01:16:43.053182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.053210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 00:34:12.860 [2024-07-26 01:16:43.053324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.053350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 
00:34:12.860 [2024-07-26 01:16:43.053505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.053535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 00:34:12.860 [2024-07-26 01:16:43.053692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.053719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 00:34:12.860 [2024-07-26 01:16:43.053863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.053890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 00:34:12.860 [2024-07-26 01:16:43.054037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.054087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 00:34:12.860 [2024-07-26 01:16:43.054250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.054277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 
00:34:12.860 [2024-07-26 01:16:43.054383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.054410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 00:34:12.860 [2024-07-26 01:16:43.054584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.054611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 00:34:12.860 [2024-07-26 01:16:43.054770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.054797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 00:34:12.860 [2024-07-26 01:16:43.054976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.055005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 00:34:12.860 [2024-07-26 01:16:43.055130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.055161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 
00:34:12.860 [2024-07-26 01:16:43.055296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.055322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 00:34:12.860 [2024-07-26 01:16:43.055433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.860 [2024-07-26 01:16:43.055459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.860 qpair failed and we were unable to recover it. 00:34:12.861 [2024-07-26 01:16:43.055582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.861 [2024-07-26 01:16:43.055614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.861 qpair failed and we were unable to recover it. 00:34:12.861 [2024-07-26 01:16:43.055764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.861 [2024-07-26 01:16:43.055792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.861 qpair failed and we were unable to recover it. 00:34:12.861 [2024-07-26 01:16:43.055926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.861 [2024-07-26 01:16:43.055967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.861 qpair failed and we were unable to recover it. 
00:34:12.861 [2024-07-26 01:16:43.056121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.861 [2024-07-26 01:16:43.056152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.861 qpair failed and we were unable to recover it. 00:34:12.861 [2024-07-26 01:16:43.056290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.861 [2024-07-26 01:16:43.056322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.861 qpair failed and we were unable to recover it. 00:34:12.861 [2024-07-26 01:16:43.056499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.861 [2024-07-26 01:16:43.056528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.861 qpair failed and we were unable to recover it. 00:34:12.861 [2024-07-26 01:16:43.056677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.861 [2024-07-26 01:16:43.056706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.861 qpair failed and we were unable to recover it. 00:34:12.861 [2024-07-26 01:16:43.056856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.861 [2024-07-26 01:16:43.056884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.861 qpair failed and we were unable to recover it. 
00:34:12.861 [2024-07-26 01:16:43.057011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.861 [2024-07-26 01:16:43.057038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.861 qpair failed and we were unable to recover it. 00:34:12.861 [2024-07-26 01:16:43.057159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.861 [2024-07-26 01:16:43.057188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.861 qpair failed and we were unable to recover it. 00:34:12.861 [2024-07-26 01:16:43.057322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.861 [2024-07-26 01:16:43.057349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.861 qpair failed and we were unable to recover it. 00:34:12.861 [2024-07-26 01:16:43.057527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.861 [2024-07-26 01:16:43.057557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.861 qpair failed and we were unable to recover it. 00:34:12.861 [2024-07-26 01:16:43.057719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.861 [2024-07-26 01:16:43.057766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.861 qpair failed and we were unable to recover it. 
00:34:12.861 [2024-07-26 01:16:43.057890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.861 [2024-07-26 01:16:43.057927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.861 qpair failed and we were unable to recover it. 00:34:12.861 [2024-07-26 01:16:43.058080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.861 [2024-07-26 01:16:43.058126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.861 qpair failed and we were unable to recover it. 00:34:12.861 [2024-07-26 01:16:43.058239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.861 [2024-07-26 01:16:43.058268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.861 qpair failed and we were unable to recover it. 00:34:12.861 [2024-07-26 01:16:43.058395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.861 [2024-07-26 01:16:43.058423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.861 qpair failed and we were unable to recover it. 00:34:12.861 [2024-07-26 01:16:43.058592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.861 [2024-07-26 01:16:43.058636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.861 qpair failed and we were unable to recover it. 
00:34:12.861 [2024-07-26 01:16:43.058851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.861 [2024-07-26 01:16:43.058881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.861 qpair failed and we were unable to recover it. 
[... the same connect() errno = 111 / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it" sequence repeats continuously from 01:16:43.058 through 01:16:43.079 for tqpair=0x7fba68000b90 and tqpair=0x7fba58000b90, all against addr=10.0.0.2, port=4420 ...]
00:34:12.864 [2024-07-26 01:16:43.079911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.864 [2024-07-26 01:16:43.079940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.864 qpair failed and we were unable to recover it. 00:34:12.864 [2024-07-26 01:16:43.080114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.864 [2024-07-26 01:16:43.080145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.864 qpair failed and we were unable to recover it. 00:34:12.864 [2024-07-26 01:16:43.080273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.864 [2024-07-26 01:16:43.080299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.864 qpair failed and we were unable to recover it. 00:34:12.864 [2024-07-26 01:16:43.080437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.864 [2024-07-26 01:16:43.080463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.864 qpair failed and we were unable to recover it. 00:34:12.864 [2024-07-26 01:16:43.080656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.864 [2024-07-26 01:16:43.080686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.864 qpair failed and we were unable to recover it. 
00:34:12.864 [2024-07-26 01:16:43.080836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.864 [2024-07-26 01:16:43.080866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.864 qpair failed and we were unable to recover it. 00:34:12.864 [2024-07-26 01:16:43.080986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.864 [2024-07-26 01:16:43.081016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.864 qpair failed and we were unable to recover it. 00:34:12.864 [2024-07-26 01:16:43.081245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.864 [2024-07-26 01:16:43.081272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.864 qpair failed and we were unable to recover it. 00:34:12.864 [2024-07-26 01:16:43.081469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.864 [2024-07-26 01:16:43.081516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.864 qpair failed and we were unable to recover it. 00:34:12.864 [2024-07-26 01:16:43.081670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.081700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 
00:34:12.865 [2024-07-26 01:16:43.081846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.081875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 00:34:12.865 [2024-07-26 01:16:43.082015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.082044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 00:34:12.865 [2024-07-26 01:16:43.082175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.082202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 00:34:12.865 [2024-07-26 01:16:43.082320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.082364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 00:34:12.865 [2024-07-26 01:16:43.082490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.082517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 
00:34:12.865 [2024-07-26 01:16:43.082655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.082682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 00:34:12.865 [2024-07-26 01:16:43.082840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.082867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 00:34:12.865 [2024-07-26 01:16:43.083024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.083054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 00:34:12.865 [2024-07-26 01:16:43.083283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.083310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 00:34:12.865 [2024-07-26 01:16:43.083495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.083526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 
00:34:12.865 [2024-07-26 01:16:43.083675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.083723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 00:34:12.865 [2024-07-26 01:16:43.083867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.083898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 00:34:12.865 [2024-07-26 01:16:43.084077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.084104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 00:34:12.865 [2024-07-26 01:16:43.084210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.084258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 00:34:12.865 [2024-07-26 01:16:43.084407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.084437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 
00:34:12.865 [2024-07-26 01:16:43.084575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.084604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 00:34:12.865 [2024-07-26 01:16:43.084762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.084788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 00:34:12.865 [2024-07-26 01:16:43.084898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.084943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 00:34:12.865 [2024-07-26 01:16:43.085084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.085114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 00:34:12.865 [2024-07-26 01:16:43.085264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.085293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 
00:34:12.865 [2024-07-26 01:16:43.085454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.085480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 00:34:12.865 [2024-07-26 01:16:43.085643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.085673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 00:34:12.865 [2024-07-26 01:16:43.085833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.085859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 00:34:12.865 [2024-07-26 01:16:43.085998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.086024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 00:34:12.865 [2024-07-26 01:16:43.086164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.086191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 
00:34:12.865 [2024-07-26 01:16:43.086326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.086353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 00:34:12.865 [2024-07-26 01:16:43.086507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.086537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 00:34:12.865 [2024-07-26 01:16:43.086696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.086723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 00:34:12.865 [2024-07-26 01:16:43.086883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.086909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 00:34:12.865 [2024-07-26 01:16:43.087087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.087118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 
00:34:12.865 [2024-07-26 01:16:43.087331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.087380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 00:34:12.865 [2024-07-26 01:16:43.087516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.087543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 00:34:12.865 [2024-07-26 01:16:43.087679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.087706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 00:34:12.865 [2024-07-26 01:16:43.087843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.087887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 00:34:12.865 [2024-07-26 01:16:43.088039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.088077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 
00:34:12.865 [2024-07-26 01:16:43.088234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.865 [2024-07-26 01:16:43.088264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.865 qpair failed and we were unable to recover it. 00:34:12.865 [2024-07-26 01:16:43.088418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.866 [2024-07-26 01:16:43.088446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.866 qpair failed and we were unable to recover it. 00:34:12.866 [2024-07-26 01:16:43.088636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.866 [2024-07-26 01:16:43.088665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.866 qpair failed and we were unable to recover it. 00:34:12.866 [2024-07-26 01:16:43.088845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.866 [2024-07-26 01:16:43.088872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.866 qpair failed and we were unable to recover it. 00:34:12.866 [2024-07-26 01:16:43.089003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.866 [2024-07-26 01:16:43.089029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.866 qpair failed and we were unable to recover it. 
00:34:12.866 [2024-07-26 01:16:43.089186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.866 [2024-07-26 01:16:43.089213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.866 qpair failed and we were unable to recover it. 00:34:12.866 [2024-07-26 01:16:43.089363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.866 [2024-07-26 01:16:43.089406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.866 qpair failed and we were unable to recover it. 00:34:12.866 [2024-07-26 01:16:43.089547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.866 [2024-07-26 01:16:43.089577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.866 qpair failed and we were unable to recover it. 00:34:12.866 [2024-07-26 01:16:43.089737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.866 [2024-07-26 01:16:43.089763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.866 qpair failed and we were unable to recover it. 00:34:12.866 [2024-07-26 01:16:43.089888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.866 [2024-07-26 01:16:43.089915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.866 qpair failed and we were unable to recover it. 
00:34:12.866 [2024-07-26 01:16:43.090018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.866 [2024-07-26 01:16:43.090044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.866 qpair failed and we were unable to recover it. 00:34:12.866 [2024-07-26 01:16:43.090194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.866 [2024-07-26 01:16:43.090221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.866 qpair failed and we were unable to recover it. 00:34:12.866 [2024-07-26 01:16:43.090382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.866 [2024-07-26 01:16:43.090425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.866 qpair failed and we were unable to recover it. 00:34:12.866 [2024-07-26 01:16:43.090604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.866 [2024-07-26 01:16:43.090635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.866 qpair failed and we were unable to recover it. 00:34:12.866 [2024-07-26 01:16:43.090789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.866 [2024-07-26 01:16:43.090819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.866 qpair failed and we were unable to recover it. 
00:34:12.866 [2024-07-26 01:16:43.090994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.866 [2024-07-26 01:16:43.091023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.866 qpair failed and we were unable to recover it. 00:34:12.866 [2024-07-26 01:16:43.091195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.866 [2024-07-26 01:16:43.091222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.866 qpair failed and we were unable to recover it. 00:34:12.866 [2024-07-26 01:16:43.091362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.866 [2024-07-26 01:16:43.091389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.866 qpair failed and we were unable to recover it. 00:34:12.866 [2024-07-26 01:16:43.091521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.866 [2024-07-26 01:16:43.091548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.866 qpair failed and we were unable to recover it. 00:34:12.866 [2024-07-26 01:16:43.091741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.866 [2024-07-26 01:16:43.091787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.866 qpair failed and we were unable to recover it. 
00:34:12.866 [2024-07-26 01:16:43.091938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.866 [2024-07-26 01:16:43.091968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.866 qpair failed and we were unable to recover it. 00:34:12.866 [2024-07-26 01:16:43.092188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.866 [2024-07-26 01:16:43.092216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.866 qpair failed and we were unable to recover it. 00:34:12.866 [2024-07-26 01:16:43.092341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.866 [2024-07-26 01:16:43.092371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.866 qpair failed and we were unable to recover it. 00:34:12.866 [2024-07-26 01:16:43.092544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.866 [2024-07-26 01:16:43.092573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.866 qpair failed and we were unable to recover it. 00:34:12.866 [2024-07-26 01:16:43.092718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.866 [2024-07-26 01:16:43.092748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.866 qpair failed and we were unable to recover it. 
00:34:12.866 [2024-07-26 01:16:43.092874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.866 [2024-07-26 01:16:43.092902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.866 qpair failed and we were unable to recover it. 00:34:12.866 [2024-07-26 01:16:43.093041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.866 [2024-07-26 01:16:43.093082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.866 qpair failed and we were unable to recover it. 00:34:12.866 [2024-07-26 01:16:43.093282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.866 [2024-07-26 01:16:43.093311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.866 qpair failed and we were unable to recover it. 00:34:12.866 [2024-07-26 01:16:43.093440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.866 [2024-07-26 01:16:43.093469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.866 qpair failed and we were unable to recover it. 00:34:12.866 [2024-07-26 01:16:43.093649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.866 [2024-07-26 01:16:43.093675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.866 qpair failed and we were unable to recover it. 
00:34:12.866 [2024-07-26 01:16:43.093891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.866 [2024-07-26 01:16:43.093919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:12.866 qpair failed and we were unable to recover it.
[... the same three-line error (posix.c:1023:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats verbatim for every reconnect attempt from 01:16:43.093891 through 01:16:43.115095 ...]
00:34:12.870 [2024-07-26 01:16:43.115095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.870 [2024-07-26 01:16:43.115125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:12.870 qpair failed and we were unable to recover it.
00:34:12.870 [2024-07-26 01:16:43.115342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.115369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 00:34:12.870 [2024-07-26 01:16:43.115557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.115587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 00:34:12.870 [2024-07-26 01:16:43.115748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.115779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 00:34:12.870 [2024-07-26 01:16:43.115950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.115978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 00:34:12.870 [2024-07-26 01:16:43.116113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.116141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 
00:34:12.870 [2024-07-26 01:16:43.116255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.116298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 00:34:12.870 [2024-07-26 01:16:43.116497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.116546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 00:34:12.870 [2024-07-26 01:16:43.116719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.116749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 00:34:12.870 [2024-07-26 01:16:43.116931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.116959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 00:34:12.870 [2024-07-26 01:16:43.117070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.117096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 
00:34:12.870 [2024-07-26 01:16:43.117256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.117307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 00:34:12.870 [2024-07-26 01:16:43.117475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.117502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 00:34:12.870 [2024-07-26 01:16:43.117681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.117707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 00:34:12.870 [2024-07-26 01:16:43.117813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.117840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 00:34:12.870 [2024-07-26 01:16:43.117973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.118009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 
00:34:12.870 [2024-07-26 01:16:43.118172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.118201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 00:34:12.870 [2024-07-26 01:16:43.118331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.118357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 00:34:12.870 [2024-07-26 01:16:43.118493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.118520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 00:34:12.870 [2024-07-26 01:16:43.118683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.118712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 00:34:12.870 [2024-07-26 01:16:43.118829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.118859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 
00:34:12.870 [2024-07-26 01:16:43.119022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.119048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 00:34:12.870 [2024-07-26 01:16:43.119191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.119219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 00:34:12.870 [2024-07-26 01:16:43.119384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.119414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 00:34:12.870 [2024-07-26 01:16:43.119546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.119573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 00:34:12.870 [2024-07-26 01:16:43.119735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.119762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 
00:34:12.870 [2024-07-26 01:16:43.119899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.119927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 00:34:12.870 [2024-07-26 01:16:43.120106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.120137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 00:34:12.870 [2024-07-26 01:16:43.120253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.120293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 00:34:12.870 [2024-07-26 01:16:43.120484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.120516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 00:34:12.870 [2024-07-26 01:16:43.120664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.120694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 
00:34:12.870 [2024-07-26 01:16:43.120840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.120870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 00:34:12.870 [2024-07-26 01:16:43.121030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.121076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 00:34:12.870 [2024-07-26 01:16:43.121243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.121270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 00:34:12.870 [2024-07-26 01:16:43.121503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.121534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 00:34:12.870 [2024-07-26 01:16:43.121721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.121750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 
00:34:12.870 [2024-07-26 01:16:43.121926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.870 [2024-07-26 01:16:43.121956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.870 qpair failed and we were unable to recover it. 00:34:12.871 [2024-07-26 01:16:43.122106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.871 [2024-07-26 01:16:43.122134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.871 qpair failed and we were unable to recover it. 00:34:12.871 [2024-07-26 01:16:43.122325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.871 [2024-07-26 01:16:43.122355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.871 qpair failed and we were unable to recover it. 00:34:12.871 [2024-07-26 01:16:43.122512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.871 [2024-07-26 01:16:43.122566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.871 qpair failed and we were unable to recover it. 00:34:12.871 [2024-07-26 01:16:43.122707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.871 [2024-07-26 01:16:43.122738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.871 qpair failed and we were unable to recover it. 
00:34:12.871 [2024-07-26 01:16:43.122875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.871 [2024-07-26 01:16:43.122913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.871 qpair failed and we were unable to recover it. 00:34:12.871 [2024-07-26 01:16:43.123084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.871 [2024-07-26 01:16:43.123112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.871 qpair failed and we were unable to recover it. 00:34:12.871 [2024-07-26 01:16:43.123281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.871 [2024-07-26 01:16:43.123311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.871 qpair failed and we were unable to recover it. 00:34:12.871 [2024-07-26 01:16:43.123487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.871 [2024-07-26 01:16:43.123516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.871 qpair failed and we were unable to recover it. 00:34:12.871 [2024-07-26 01:16:43.123667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.871 [2024-07-26 01:16:43.123704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.871 qpair failed and we were unable to recover it. 
00:34:12.871 [2024-07-26 01:16:43.123846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.871 [2024-07-26 01:16:43.123874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.871 qpair failed and we were unable to recover it. 00:34:12.871 [2024-07-26 01:16:43.124012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.871 [2024-07-26 01:16:43.124056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.871 qpair failed and we were unable to recover it. 00:34:12.871 [2024-07-26 01:16:43.124251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.871 [2024-07-26 01:16:43.124282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.871 qpair failed and we were unable to recover it. 00:34:12.871 [2024-07-26 01:16:43.124461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.871 [2024-07-26 01:16:43.124487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.871 qpair failed and we were unable to recover it. 00:34:12.871 [2024-07-26 01:16:43.124669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.871 [2024-07-26 01:16:43.124699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.871 qpair failed and we were unable to recover it. 
00:34:12.871 [2024-07-26 01:16:43.124882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.871 [2024-07-26 01:16:43.124910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.871 qpair failed and we were unable to recover it. 00:34:12.871 [2024-07-26 01:16:43.125050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.871 [2024-07-26 01:16:43.125110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.871 qpair failed and we were unable to recover it. 00:34:12.871 [2024-07-26 01:16:43.125247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.871 [2024-07-26 01:16:43.125275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.871 qpair failed and we were unable to recover it. 00:34:12.871 [2024-07-26 01:16:43.125412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.871 [2024-07-26 01:16:43.125439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.871 qpair failed and we were unable to recover it. 00:34:12.871 [2024-07-26 01:16:43.125567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.871 [2024-07-26 01:16:43.125596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.871 qpair failed and we were unable to recover it. 
00:34:12.871 [2024-07-26 01:16:43.125818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.871 [2024-07-26 01:16:43.125844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.871 qpair failed and we were unable to recover it. 00:34:12.871 [2024-07-26 01:16:43.126021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.871 [2024-07-26 01:16:43.126054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.871 qpair failed and we were unable to recover it. 00:34:12.871 [2024-07-26 01:16:43.126248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.871 [2024-07-26 01:16:43.126278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.871 qpair failed and we were unable to recover it. 00:34:12.871 [2024-07-26 01:16:43.126504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.871 [2024-07-26 01:16:43.126552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.871 qpair failed and we were unable to recover it. 00:34:12.871 [2024-07-26 01:16:43.126733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.871 [2024-07-26 01:16:43.126763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.871 qpair failed and we were unable to recover it. 
00:34:12.871 [2024-07-26 01:16:43.126983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.871 [2024-07-26 01:16:43.127011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.871 qpair failed and we were unable to recover it. 00:34:12.871 [2024-07-26 01:16:43.127116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.871 [2024-07-26 01:16:43.127142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.871 qpair failed and we were unable to recover it. 00:34:12.871 [2024-07-26 01:16:43.127258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.871 [2024-07-26 01:16:43.127286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.871 qpair failed and we were unable to recover it. 00:34:12.871 [2024-07-26 01:16:43.127441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.871 [2024-07-26 01:16:43.127471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.871 qpair failed and we were unable to recover it. 00:34:12.871 [2024-07-26 01:16:43.127602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.871 [2024-07-26 01:16:43.127628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.871 qpair failed and we were unable to recover it. 
00:34:12.871 [2024-07-26 01:16:43.127788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.871 [2024-07-26 01:16:43.127815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.871 qpair failed and we were unable to recover it. 00:34:12.871 [2024-07-26 01:16:43.127975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.871 [2024-07-26 01:16:43.128005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.872 qpair failed and we were unable to recover it. 00:34:12.872 [2024-07-26 01:16:43.128145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.872 [2024-07-26 01:16:43.128174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.872 qpair failed and we were unable to recover it. 00:34:12.872 [2024-07-26 01:16:43.128326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.872 [2024-07-26 01:16:43.128359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.872 qpair failed and we were unable to recover it. 00:34:12.872 [2024-07-26 01:16:43.128516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.872 [2024-07-26 01:16:43.128546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.872 qpair failed and we were unable to recover it. 
00:34:12.872 [2024-07-26 01:16:43.128695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.872 [2024-07-26 01:16:43.128726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.872 qpair failed and we were unable to recover it. 00:34:12.872 [2024-07-26 01:16:43.128860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.872 [2024-07-26 01:16:43.128893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.872 qpair failed and we were unable to recover it. 00:34:12.872 [2024-07-26 01:16:43.129023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.872 [2024-07-26 01:16:43.129050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.872 qpair failed and we were unable to recover it. 00:34:12.872 [2024-07-26 01:16:43.129216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.872 [2024-07-26 01:16:43.129245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.872 qpair failed and we were unable to recover it. 00:34:12.872 [2024-07-26 01:16:43.129419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.872 [2024-07-26 01:16:43.129458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.872 qpair failed and we were unable to recover it. 
00:34:12.872 [2024-07-26 01:16:43.129586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.872 [2024-07-26 01:16:43.129615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.872 qpair failed and we were unable to recover it. 
[... the three-line sequence above (posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock error for tqpair=0x7fba68000b90, addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats continuously, identical except for timestamps, from 01:16:43.129 through 01:16:43.150 ...]
00:34:12.875 [2024-07-26 01:16:43.150545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.875 [2024-07-26 01:16:43.150572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.875 qpair failed and we were unable to recover it. 00:34:12.875 [2024-07-26 01:16:43.150706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.875 [2024-07-26 01:16:43.150733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.875 qpair failed and we were unable to recover it. 00:34:12.875 [2024-07-26 01:16:43.150870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.875 [2024-07-26 01:16:43.150898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.875 qpair failed and we were unable to recover it. 00:34:12.875 [2024-07-26 01:16:43.151008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.875 [2024-07-26 01:16:43.151050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.875 qpair failed and we were unable to recover it. 00:34:12.875 [2024-07-26 01:16:43.151222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.875 [2024-07-26 01:16:43.151252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.875 qpair failed and we were unable to recover it. 
00:34:12.875 [2024-07-26 01:16:43.151401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.875 [2024-07-26 01:16:43.151431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.875 qpair failed and we were unable to recover it. 00:34:12.875 [2024-07-26 01:16:43.151615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.875 [2024-07-26 01:16:43.151641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.875 qpair failed and we were unable to recover it. 00:34:12.875 [2024-07-26 01:16:43.151833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.875 [2024-07-26 01:16:43.151874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.875 qpair failed and we were unable to recover it. 00:34:12.875 [2024-07-26 01:16:43.152036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.875 [2024-07-26 01:16:43.152074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.875 qpair failed and we were unable to recover it. 00:34:12.875 [2024-07-26 01:16:43.152224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.875 [2024-07-26 01:16:43.152253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.875 qpair failed and we were unable to recover it. 
00:34:12.875 [2024-07-26 01:16:43.152386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.875 [2024-07-26 01:16:43.152413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.875 qpair failed and we were unable to recover it. 00:34:12.875 [2024-07-26 01:16:43.152524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.875 [2024-07-26 01:16:43.152549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.875 qpair failed and we were unable to recover it. 00:34:12.875 [2024-07-26 01:16:43.152694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.875 [2024-07-26 01:16:43.152721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.875 qpair failed and we were unable to recover it. 00:34:12.875 [2024-07-26 01:16:43.152911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.875 [2024-07-26 01:16:43.152940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.875 qpair failed and we were unable to recover it. 00:34:12.875 [2024-07-26 01:16:43.153088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.875 [2024-07-26 01:16:43.153116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.875 qpair failed and we were unable to recover it. 
00:34:12.875 [2024-07-26 01:16:43.153255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.875 [2024-07-26 01:16:43.153283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.875 qpair failed and we were unable to recover it. 00:34:12.875 [2024-07-26 01:16:43.153465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.876 [2024-07-26 01:16:43.153495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.876 qpair failed and we were unable to recover it. 00:34:12.876 [2024-07-26 01:16:43.153648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.876 [2024-07-26 01:16:43.153678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.876 qpair failed and we were unable to recover it. 00:34:12.876 [2024-07-26 01:16:43.153814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.876 [2024-07-26 01:16:43.153841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.876 qpair failed and we were unable to recover it. 00:34:12.876 [2024-07-26 01:16:43.153982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.876 [2024-07-26 01:16:43.154010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.876 qpair failed and we were unable to recover it. 
00:34:12.876 [2024-07-26 01:16:43.154145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.876 [2024-07-26 01:16:43.154173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.876 qpair failed and we were unable to recover it. 00:34:12.876 [2024-07-26 01:16:43.154419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.876 [2024-07-26 01:16:43.154450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.876 qpair failed and we were unable to recover it. 00:34:12.876 [2024-07-26 01:16:43.154633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.876 [2024-07-26 01:16:43.154660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.876 qpair failed and we were unable to recover it. 00:34:12.876 [2024-07-26 01:16:43.154763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.876 [2024-07-26 01:16:43.154807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.876 qpair failed and we were unable to recover it. 00:34:12.876 [2024-07-26 01:16:43.154929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.876 [2024-07-26 01:16:43.154964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.876 qpair failed and we were unable to recover it. 
00:34:12.876 [2024-07-26 01:16:43.155134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.876 [2024-07-26 01:16:43.155166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.876 qpair failed and we were unable to recover it. 00:34:12.876 [2024-07-26 01:16:43.155322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.876 [2024-07-26 01:16:43.155349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.876 qpair failed and we were unable to recover it. 00:34:12.876 [2024-07-26 01:16:43.155464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.876 [2024-07-26 01:16:43.155490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.876 qpair failed and we were unable to recover it. 00:34:12.876 [2024-07-26 01:16:43.155660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.876 [2024-07-26 01:16:43.155687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.876 qpair failed and we were unable to recover it. 00:34:12.876 [2024-07-26 01:16:43.155818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.876 [2024-07-26 01:16:43.155844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.876 qpair failed and we were unable to recover it. 
00:34:12.876 [2024-07-26 01:16:43.155994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.876 [2024-07-26 01:16:43.156049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.876 qpair failed and we were unable to recover it. 00:34:12.876 [2024-07-26 01:16:43.156231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.876 [2024-07-26 01:16:43.156258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.876 qpair failed and we were unable to recover it. 00:34:12.876 [2024-07-26 01:16:43.156384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.876 [2024-07-26 01:16:43.156419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.876 qpair failed and we were unable to recover it. 00:34:12.876 [2024-07-26 01:16:43.156595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.876 [2024-07-26 01:16:43.156621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.876 qpair failed and we were unable to recover it. 00:34:12.876 [2024-07-26 01:16:43.156757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.876 [2024-07-26 01:16:43.156784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.876 qpair failed and we were unable to recover it. 
00:34:12.876 [2024-07-26 01:16:43.157013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.876 [2024-07-26 01:16:43.157044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.876 qpair failed and we were unable to recover it. 00:34:12.876 [2024-07-26 01:16:43.157213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.876 [2024-07-26 01:16:43.157243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.876 qpair failed and we were unable to recover it. 00:34:12.876 [2024-07-26 01:16:43.157391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.876 [2024-07-26 01:16:43.157421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.876 qpair failed and we were unable to recover it. 00:34:12.876 [2024-07-26 01:16:43.157552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.876 [2024-07-26 01:16:43.157582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.876 qpair failed and we were unable to recover it. 00:34:12.876 [2024-07-26 01:16:43.157725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.876 [2024-07-26 01:16:43.157753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.876 qpair failed and we were unable to recover it. 
00:34:12.876 [2024-07-26 01:16:43.157884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.876 [2024-07-26 01:16:43.157911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.876 qpair failed and we were unable to recover it. 00:34:12.876 [2024-07-26 01:16:43.158049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.876 [2024-07-26 01:16:43.158086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.876 qpair failed and we were unable to recover it. 00:34:12.876 [2024-07-26 01:16:43.158236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.876 [2024-07-26 01:16:43.158263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.876 qpair failed and we were unable to recover it. 00:34:12.876 [2024-07-26 01:16:43.158394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.876 [2024-07-26 01:16:43.158421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.876 qpair failed and we were unable to recover it. 00:34:12.876 [2024-07-26 01:16:43.158557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.876 [2024-07-26 01:16:43.158585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.876 qpair failed and we were unable to recover it. 
00:34:12.876 [2024-07-26 01:16:43.158697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.158733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 00:34:12.877 [2024-07-26 01:16:43.158913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.158941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 00:34:12.877 [2024-07-26 01:16:43.159100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.159130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 00:34:12.877 [2024-07-26 01:16:43.159247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.159278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 00:34:12.877 [2024-07-26 01:16:43.159443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.159470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 
00:34:12.877 [2024-07-26 01:16:43.159602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.159628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 00:34:12.877 [2024-07-26 01:16:43.159770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.159817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 00:34:12.877 [2024-07-26 01:16:43.159967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.160000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 00:34:12.877 [2024-07-26 01:16:43.160206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.160234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 00:34:12.877 [2024-07-26 01:16:43.160393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.160420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 
00:34:12.877 [2024-07-26 01:16:43.160529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.160555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 00:34:12.877 [2024-07-26 01:16:43.160692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.160718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 00:34:12.877 [2024-07-26 01:16:43.160893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.160923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 00:34:12.877 [2024-07-26 01:16:43.161056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.161097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 00:34:12.877 [2024-07-26 01:16:43.161244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.161271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 
00:34:12.877 [2024-07-26 01:16:43.161408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.161436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 00:34:12.877 [2024-07-26 01:16:43.161574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.161603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 00:34:12.877 [2024-07-26 01:16:43.161789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.161818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 00:34:12.877 [2024-07-26 01:16:43.161949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.161976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 00:34:12.877 [2024-07-26 01:16:43.162149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.162178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 
00:34:12.877 [2024-07-26 01:16:43.162316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.162346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 00:34:12.877 [2024-07-26 01:16:43.162498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.162526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 00:34:12.877 [2024-07-26 01:16:43.162663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.162691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 00:34:12.877 [2024-07-26 01:16:43.162824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.162852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 00:34:12.877 [2024-07-26 01:16:43.163024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.163051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 
00:34:12.877 [2024-07-26 01:16:43.163236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.163263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 00:34:12.877 [2024-07-26 01:16:43.163420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.163449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 00:34:12.877 [2024-07-26 01:16:43.163665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.163715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 00:34:12.877 [2024-07-26 01:16:43.163908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.163939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 00:34:12.877 [2024-07-26 01:16:43.164071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.164107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 
00:34:12.877 [2024-07-26 01:16:43.164250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.164293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 00:34:12.877 [2024-07-26 01:16:43.164545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.164595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 00:34:12.877 [2024-07-26 01:16:43.164759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.164786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 00:34:12.877 [2024-07-26 01:16:43.164895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.164927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 00:34:12.877 [2024-07-26 01:16:43.165071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.877 [2024-07-26 01:16:43.165116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.877 qpair failed and we were unable to recover it. 
00:34:12.878 [2024-07-26 01:16:43.165269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.165298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 00:34:12.878 [2024-07-26 01:16:43.165415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.165445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 00:34:12.878 [2024-07-26 01:16:43.165626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.165653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 00:34:12.878 [2024-07-26 01:16:43.165767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.165813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 00:34:12.878 [2024-07-26 01:16:43.165970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.166001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 
00:34:12.878 [2024-07-26 01:16:43.166187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.166214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 00:34:12.878 [2024-07-26 01:16:43.166378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.166405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 00:34:12.878 [2024-07-26 01:16:43.166576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.166606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 00:34:12.878 [2024-07-26 01:16:43.166756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.166786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 00:34:12.878 [2024-07-26 01:16:43.166940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.166968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 
00:34:12.878 [2024-07-26 01:16:43.167136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.167164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 00:34:12.878 [2024-07-26 01:16:43.167299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.167326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 00:34:12.878 [2024-07-26 01:16:43.167466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.167493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 00:34:12.878 [2024-07-26 01:16:43.167624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.167651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 00:34:12.878 [2024-07-26 01:16:43.167794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.167821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 
00:34:12.878 [2024-07-26 01:16:43.167970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.168010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 00:34:12.878 [2024-07-26 01:16:43.168185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.168213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 00:34:12.878 [2024-07-26 01:16:43.168373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.168400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 00:34:12.878 [2024-07-26 01:16:43.168528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.168555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 00:34:12.878 [2024-07-26 01:16:43.168687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.168731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 
00:34:12.878 [2024-07-26 01:16:43.168904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.168934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 00:34:12.878 [2024-07-26 01:16:43.169086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.169117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 00:34:12.878 [2024-07-26 01:16:43.169290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.169318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 00:34:12.878 [2024-07-26 01:16:43.169449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.169492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 00:34:12.878 [2024-07-26 01:16:43.169639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.169670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 
00:34:12.878 [2024-07-26 01:16:43.169825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.169857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 00:34:12.878 [2024-07-26 01:16:43.170008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.170035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 00:34:12.878 [2024-07-26 01:16:43.170179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.170207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 00:34:12.878 [2024-07-26 01:16:43.170347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.170373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 00:34:12.878 [2024-07-26 01:16:43.170491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.170517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 
00:34:12.878 [2024-07-26 01:16:43.170658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.170686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 00:34:12.878 [2024-07-26 01:16:43.170795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.170839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 00:34:12.878 [2024-07-26 01:16:43.171070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.171100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 00:34:12.878 [2024-07-26 01:16:43.171266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.171293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 00:34:12.878 [2024-07-26 01:16:43.171402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.171429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 
00:34:12.878 [2024-07-26 01:16:43.171537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.878 [2024-07-26 01:16:43.171564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.878 qpair failed and we were unable to recover it. 00:34:12.879 [2024-07-26 01:16:43.171716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.171749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 00:34:12.879 [2024-07-26 01:16:43.171912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.171941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 00:34:12.879 [2024-07-26 01:16:43.172073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.172105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 00:34:12.879 [2024-07-26 01:16:43.172211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.172238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 
00:34:12.879 [2024-07-26 01:16:43.172360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.172389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 00:34:12.879 [2024-07-26 01:16:43.172529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.172559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 00:34:12.879 [2024-07-26 01:16:43.172697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.172727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 00:34:12.879 [2024-07-26 01:16:43.172898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.172925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 00:34:12.879 [2024-07-26 01:16:43.173089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.173119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 
00:34:12.879 [2024-07-26 01:16:43.173307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.173337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 00:34:12.879 [2024-07-26 01:16:43.173513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.173540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 00:34:12.879 [2024-07-26 01:16:43.173667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.173722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 00:34:12.879 [2024-07-26 01:16:43.173884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.173915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 00:34:12.879 [2024-07-26 01:16:43.174082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.174110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 
00:34:12.879 [2024-07-26 01:16:43.174323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.174350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 00:34:12.879 [2024-07-26 01:16:43.174529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.174558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 00:34:12.879 [2024-07-26 01:16:43.174786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.174817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 00:34:12.879 [2024-07-26 01:16:43.175038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.175076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 00:34:12.879 [2024-07-26 01:16:43.175223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.175250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 
00:34:12.879 [2024-07-26 01:16:43.175382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.175409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 00:34:12.879 [2024-07-26 01:16:43.175549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.175576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 00:34:12.879 [2024-07-26 01:16:43.175688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.175715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 00:34:12.879 [2024-07-26 01:16:43.175858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.175885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 00:34:12.879 [2024-07-26 01:16:43.176018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.176046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 
00:34:12.879 [2024-07-26 01:16:43.176192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.176235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 00:34:12.879 [2024-07-26 01:16:43.176376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.176405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 00:34:12.879 [2024-07-26 01:16:43.176581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.176608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 00:34:12.879 [2024-07-26 01:16:43.176748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.176776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 00:34:12.879 [2024-07-26 01:16:43.176911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.176938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 
00:34:12.879 [2024-07-26 01:16:43.177133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.177180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 00:34:12.879 [2024-07-26 01:16:43.177353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.177382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 00:34:12.879 [2024-07-26 01:16:43.177565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.177596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 00:34:12.879 [2024-07-26 01:16:43.177839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.177890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 00:34:12.879 [2024-07-26 01:16:43.178068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.178099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 
00:34:12.879 [2024-07-26 01:16:43.178288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.879 [2024-07-26 01:16:43.178316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.879 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-26 01:16:43.178462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-26 01:16:43.178493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-26 01:16:43.178643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-26 01:16:43.178673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-26 01:16:43.178787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-26 01:16:43.178817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-26 01:16:43.178969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-26 01:16:43.178997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 
00:34:12.880 [2024-07-26 01:16:43.179137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-26 01:16:43.179165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-26 01:16:43.179304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-26 01:16:43.179332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-26 01:16:43.179469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-26 01:16:43.179497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-26 01:16:43.179632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-26 01:16:43.179665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-26 01:16:43.179816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-26 01:16:43.179846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 
00:34:12.880 [2024-07-26 01:16:43.179984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-26 01:16:43.180014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-26 01:16:43.180166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-26 01:16:43.180200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-26 01:16:43.180361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-26 01:16:43.180391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-26 01:16:43.180528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-26 01:16:43.180573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-26 01:16:43.180747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-26 01:16:43.180777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 
00:34:12.880 [2024-07-26 01:16:43.180923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-26 01:16:43.180954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-26 01:16:43.181106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-26 01:16:43.181134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-26 01:16:43.181270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-26 01:16:43.181299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-26 01:16:43.181499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-26 01:16:43.181532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-26 01:16:43.181720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-26 01:16:43.181748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 
00:34:12.880 [2024-07-26 01:16:43.181880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-26 01:16:43.181908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-26 01:16:43.182047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-26 01:16:43.182097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-26 01:16:43.182278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-26 01:16:43.182309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-26 01:16:43.182456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-26 01:16:43.182487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 00:34:12.880 [2024-07-26 01:16:43.182675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.880 [2024-07-26 01:16:43.182703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:12.880 qpair failed and we were unable to recover it. 
00:34:12.880 [2024-07-26 01:16:43.184766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.880 [2024-07-26 01:16:43.184821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:12.880 qpair failed and we were unable to recover it.
00:34:12.883 [2024-07-26 01:16:43.203600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.883 [2024-07-26 01:16:43.203629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.883 qpair failed and we were unable to recover it. 00:34:12.883 [2024-07-26 01:16:43.203789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.883 [2024-07-26 01:16:43.203817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.883 qpair failed and we were unable to recover it. 00:34:12.883 [2024-07-26 01:16:43.203975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.883 [2024-07-26 01:16:43.204007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.883 qpair failed and we were unable to recover it. 00:34:12.883 [2024-07-26 01:16:43.204155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.883 [2024-07-26 01:16:43.204199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.883 qpair failed and we were unable to recover it. 00:34:12.883 [2024-07-26 01:16:43.204310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.883 [2024-07-26 01:16:43.204336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.883 qpair failed and we were unable to recover it. 
00:34:12.883 [2024-07-26 01:16:43.204499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.883 [2024-07-26 01:16:43.204526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.883 qpair failed and we were unable to recover it. 00:34:12.883 [2024-07-26 01:16:43.204704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.883 [2024-07-26 01:16:43.204733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.883 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-26 01:16:43.204884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.204915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-26 01:16:43.205082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.205110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-26 01:16:43.205226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.205252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 
00:34:12.884 [2024-07-26 01:16:43.205421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.205463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-26 01:16:43.205609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.205638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-26 01:16:43.205790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.205819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-26 01:16:43.205975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.206003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-26 01:16:43.206151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.206179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 
00:34:12.884 [2024-07-26 01:16:43.206317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.206345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-26 01:16:43.206451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.206476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-26 01:16:43.206640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.206667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-26 01:16:43.206822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.206853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-26 01:16:43.207029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.207070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 
00:34:12.884 [2024-07-26 01:16:43.207239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.207271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-26 01:16:43.207406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.207434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-26 01:16:43.207548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.207574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-26 01:16:43.207702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.207728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-26 01:16:43.207847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.207876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 
00:34:12.884 [2024-07-26 01:16:43.208029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.208055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-26 01:16:43.208227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.208266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-26 01:16:43.208439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.208466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-26 01:16:43.208627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.208674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-26 01:16:43.208848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.208876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 
00:34:12.884 [2024-07-26 01:16:43.208983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.209009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-26 01:16:43.209192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.209228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-26 01:16:43.209358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.209393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-26 01:16:43.209542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.209569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-26 01:16:43.209706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.209733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 
00:34:12.884 [2024-07-26 01:16:43.209896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.209926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-26 01:16:43.210076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.210107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-26 01:16:43.210290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.210317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-26 01:16:43.210428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.210473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-26 01:16:43.210649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.210679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 
00:34:12.884 [2024-07-26 01:16:43.210827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.210856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-26 01:16:43.211015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.211042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-26 01:16:43.211172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.211214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.884 [2024-07-26 01:16:43.211436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.884 [2024-07-26 01:16:43.211477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.884 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-26 01:16:43.211641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.211669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 
00:34:12.885 [2024-07-26 01:16:43.211831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.211858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-26 01:16:43.212020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.212051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-26 01:16:43.212226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.212253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-26 01:16:43.212353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.212378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-26 01:16:43.212548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.212576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 
00:34:12.885 [2024-07-26 01:16:43.212693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.212721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-26 01:16:43.212875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.212905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-26 01:16:43.213051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.213093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-26 01:16:43.213225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.213252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-26 01:16:43.213372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.213414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 
00:34:12.885 [2024-07-26 01:16:43.213593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.213624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-26 01:16:43.213801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.213829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-26 01:16:43.213966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.213993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-26 01:16:43.214149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.214179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-26 01:16:43.214367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.214394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 
00:34:12.885 [2024-07-26 01:16:43.214530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.214561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-26 01:16:43.214698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.214725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-26 01:16:43.214914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.214951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-26 01:16:43.215101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.215131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-26 01:16:43.215308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.215337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 
00:34:12.885 [2024-07-26 01:16:43.215494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.215522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-26 01:16:43.215661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.215688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-26 01:16:43.215823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.215851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-26 01:16:43.216007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.216035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-26 01:16:43.216158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.216186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 
00:34:12.885 [2024-07-26 01:16:43.216333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.216360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-26 01:16:43.216520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.216549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-26 01:16:43.216725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.216751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-26 01:16:43.216912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.216939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 00:34:12.885 [2024-07-26 01:16:43.217085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.217113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 
00:34:12.885 [2024-07-26 01:16:43.217284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.885 [2024-07-26 01:16:43.217330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.885 qpair failed and we were unable to recover it. 
[... identical connect()/qpair failure message pairs for tqpair=0x7fba68000b90 (addr=10.0.0.2, port=4420) repeated through 2024-07-26 01:16:43.237882 omitted ...]
00:34:12.889 [2024-07-26 01:16:43.238041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.238077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-26 01:16:43.238207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.238238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-26 01:16:43.238391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.238421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-26 01:16:43.238582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.238609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-26 01:16:43.238795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.238825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 
00:34:12.889 [2024-07-26 01:16:43.238954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.238982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-26 01:16:43.239117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.239148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-26 01:16:43.239312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.239341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-26 01:16:43.239521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.239551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-26 01:16:43.239767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.239816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 
00:34:12.889 [2024-07-26 01:16:43.239996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.240025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-26 01:16:43.240177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.240208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-26 01:16:43.240384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.240415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-26 01:16:43.240535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.240565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-26 01:16:43.240746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.240778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 
00:34:12.889 [2024-07-26 01:16:43.240907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.240944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-26 01:16:43.241092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.241138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-26 01:16:43.241263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.241297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-26 01:16:43.241496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.241523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-26 01:16:43.241661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.241688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 
00:34:12.889 [2024-07-26 01:16:43.241822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.241851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-26 01:16:43.241992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.242019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-26 01:16:43.242166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.242196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-26 01:16:43.242358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.242386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-26 01:16:43.242502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.242547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 
00:34:12.889 [2024-07-26 01:16:43.242688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.242718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-26 01:16:43.242838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.242872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-26 01:16:43.242998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.243025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-26 01:16:43.243168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.243196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-26 01:16:43.243302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.243326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 
00:34:12.889 [2024-07-26 01:16:43.243467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.243502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-26 01:16:43.243629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.243656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-26 01:16:43.243753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.243778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-26 01:16:43.243891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.243926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-26 01:16:43.244083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.244127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 
00:34:12.889 [2024-07-26 01:16:43.244231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.244256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-26 01:16:43.244369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.889 [2024-07-26 01:16:43.244396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.889 qpair failed and we were unable to recover it. 00:34:12.889 [2024-07-26 01:16:43.244531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.244562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-26 01:16:43.244674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.244705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-26 01:16:43.244859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.244886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 
00:34:12.890 [2024-07-26 01:16:43.245018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.245045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-26 01:16:43.245185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.245215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-26 01:16:43.245385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.245425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-26 01:16:43.245594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.245621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-26 01:16:43.245759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.245786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 
00:34:12.890 [2024-07-26 01:16:43.245935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.245962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-26 01:16:43.246073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.246101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-26 01:16:43.246239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.246266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-26 01:16:43.246405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.246448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-26 01:16:43.246639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.246669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 
00:34:12.890 [2024-07-26 01:16:43.246787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.246817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-26 01:16:43.246961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.246988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-26 01:16:43.247100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.247128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-26 01:16:43.247236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.247264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-26 01:16:43.247367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.247393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 
00:34:12.890 [2024-07-26 01:16:43.247544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.247573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-26 01:16:43.247708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.247735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-26 01:16:43.247842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.247867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-26 01:16:43.248003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.248029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-26 01:16:43.248146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.248172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 
00:34:12.890 [2024-07-26 01:16:43.248307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.248350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-26 01:16:43.248509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.248545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-26 01:16:43.248692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.248719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-26 01:16:43.248860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.248904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-26 01:16:43.249085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.249129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 
00:34:12.890 [2024-07-26 01:16:43.249238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.249264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-26 01:16:43.249408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.249435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-26 01:16:43.249599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.249626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-26 01:16:43.249809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.249839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-26 01:16:43.249990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.250016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 
00:34:12.890 [2024-07-26 01:16:43.250160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.250209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-26 01:16:43.250357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.250384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-26 01:16:43.250522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.890 [2024-07-26 01:16:43.250547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.890 qpair failed and we were unable to recover it. 00:34:12.890 [2024-07-26 01:16:43.250660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-26 01:16:43.250693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 00:34:12.891 [2024-07-26 01:16:43.250858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-26 01:16:43.250888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 
00:34:12.891 [2024-07-26 01:16:43.251040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-26 01:16:43.251075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 00:34:12.891 [2024-07-26 01:16:43.251187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-26 01:16:43.251216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 00:34:12.891 [2024-07-26 01:16:43.251329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-26 01:16:43.251356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 00:34:12.891 [2024-07-26 01:16:43.251515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-26 01:16:43.251545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 00:34:12.891 [2024-07-26 01:16:43.251683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-26 01:16:43.251711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 
00:34:12.891 [2024-07-26 01:16:43.251820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-26 01:16:43.251848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 00:34:12.891 [2024-07-26 01:16:43.251997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-26 01:16:43.252026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 00:34:12.891 [2024-07-26 01:16:43.252182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-26 01:16:43.252210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 00:34:12.891 [2024-07-26 01:16:43.252319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.891 [2024-07-26 01:16:43.252346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:12.891 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.252488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.252532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 
00:34:13.164 [2024-07-26 01:16:43.252679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.252710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.252838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.252868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.253087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.253116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.253337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.253367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.253525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.253552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 
00:34:13.164 [2024-07-26 01:16:43.253662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.253688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.253833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.253862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.253977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.254004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.254173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.254204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.254321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.254347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 
00:34:13.164 [2024-07-26 01:16:43.254514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.254541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.254647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.254672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.254837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.254871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.255035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.255082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.255217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.255244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 
00:34:13.164 [2024-07-26 01:16:43.255384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.255411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.255560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.255590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.255741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.255768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.255931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.255958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.256097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.256128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 
00:34:13.164 [2024-07-26 01:16:43.256294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.256327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.256451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.256480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.256666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.256693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.256891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.256931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.257089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.257120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 
00:34:13.164 [2024-07-26 01:16:43.257264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.257299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.257460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.257488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.257626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.257670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.257788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.257816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.257934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.257965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 
00:34:13.164 [2024-07-26 01:16:43.258116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.258144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.258257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.258282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.258442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.258471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.258620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.258649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.258806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.258836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 
00:34:13.164 [2024-07-26 01:16:43.258977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.259023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.259183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.259214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.259387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.259416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.259601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.259629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.259792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.259822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 
00:34:13.164 [2024-07-26 01:16:43.259967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.259998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.260145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.260174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.260307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.260334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.260471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.260516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.260643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.260673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 
00:34:13.164 [2024-07-26 01:16:43.260849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.260883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.261073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.261101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.261263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.261293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.261486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.261538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.261691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.261720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 
00:34:13.164 [2024-07-26 01:16:43.261897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.261933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.262051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.262123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.262313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.262340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.262442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.262469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.262575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.262602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 
00:34:13.164 [2024-07-26 01:16:43.262714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.262744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.262891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.262918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.263096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.263127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.164 [2024-07-26 01:16:43.263253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.164 [2024-07-26 01:16:43.263280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.164 qpair failed and we were unable to recover it. 00:34:13.165 [2024-07-26 01:16:43.263414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.263441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 
00:34:13.165 [2024-07-26 01:16:43.263615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.263641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 00:34:13.165 [2024-07-26 01:16:43.263806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.263834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 00:34:13.165 [2024-07-26 01:16:43.263958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.263995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 00:34:13.165 [2024-07-26 01:16:43.264124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.264166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 00:34:13.165 [2024-07-26 01:16:43.264327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.264371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 
00:34:13.165 [2024-07-26 01:16:43.264520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.264553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 00:34:13.165 [2024-07-26 01:16:43.264698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.264724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 00:34:13.165 [2024-07-26 01:16:43.264901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.264932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 00:34:13.165 [2024-07-26 01:16:43.265106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.265134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 00:34:13.165 [2024-07-26 01:16:43.265296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.265340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 
00:34:13.165 [2024-07-26 01:16:43.265483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.265509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 00:34:13.165 [2024-07-26 01:16:43.265688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.265717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 00:34:13.165 [2024-07-26 01:16:43.265839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.265876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 00:34:13.165 [2024-07-26 01:16:43.266021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.266052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 00:34:13.165 [2024-07-26 01:16:43.266226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.266253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 
00:34:13.165 [2024-07-26 01:16:43.266388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.266434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 00:34:13.165 [2024-07-26 01:16:43.266683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.266733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 00:34:13.165 [2024-07-26 01:16:43.266885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.266924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 00:34:13.165 [2024-07-26 01:16:43.267084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.267112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 00:34:13.165 [2024-07-26 01:16:43.267271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.267301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 
00:34:13.165 [2024-07-26 01:16:43.267498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.267550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 00:34:13.165 [2024-07-26 01:16:43.267690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.267720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 00:34:13.165 [2024-07-26 01:16:43.267848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.267882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 00:34:13.165 [2024-07-26 01:16:43.268057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.268110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 00:34:13.165 [2024-07-26 01:16:43.268267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.268296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 
00:34:13.165 [2024-07-26 01:16:43.268473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.268500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 00:34:13.165 [2024-07-26 01:16:43.268636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.268662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 00:34:13.165 [2024-07-26 01:16:43.268801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.268846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 00:34:13.165 [2024-07-26 01:16:43.268998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.269029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 00:34:13.165 [2024-07-26 01:16:43.269226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.269256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 
00:34:13.165 [2024-07-26 01:16:43.269444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.269478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 00:34:13.165 [2024-07-26 01:16:43.269629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.269659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 00:34:13.165 [2024-07-26 01:16:43.269840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.269869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 00:34:13.165 [2024-07-26 01:16:43.270016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.270047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 00:34:13.165 [2024-07-26 01:16:43.270202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.165 [2024-07-26 01:16:43.270230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.165 qpair failed and we were unable to recover it. 
00:34:13.165 [2024-07-26 01:16:43.270391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.165 [2024-07-26 01:16:43.270435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.165 qpair failed and we were unable to recover it.
00:34:13.165 [2024-07-26 01:16:43.270648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.165 [2024-07-26 01:16:43.270700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.165 qpair failed and we were unable to recover it.
00:34:13.165 [2024-07-26 01:16:43.270848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.165 [2024-07-26 01:16:43.270877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.165 qpair failed and we were unable to recover it.
00:34:13.165 [2024-07-26 01:16:43.271038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.165 [2024-07-26 01:16:43.271075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.165 qpair failed and we were unable to recover it.
00:34:13.165 [2024-07-26 01:16:43.271194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.165 [2024-07-26 01:16:43.271221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.165 qpair failed and we were unable to recover it.
00:34:13.165 [2024-07-26 01:16:43.271346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.165 [2024-07-26 01:16:43.271372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.165 qpair failed and we were unable to recover it.
00:34:13.165 [2024-07-26 01:16:43.271526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.165 [2024-07-26 01:16:43.271557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.165 qpair failed and we were unable to recover it.
00:34:13.165 [2024-07-26 01:16:43.271737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.165 [2024-07-26 01:16:43.271764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.165 qpair failed and we were unable to recover it.
00:34:13.165 [2024-07-26 01:16:43.271901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.165 [2024-07-26 01:16:43.271946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.165 qpair failed and we were unable to recover it.
00:34:13.165 [2024-07-26 01:16:43.272052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.165 [2024-07-26 01:16:43.272094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.165 qpair failed and we were unable to recover it.
00:34:13.165 [2024-07-26 01:16:43.272253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.165 [2024-07-26 01:16:43.272289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.165 qpair failed and we were unable to recover it.
00:34:13.165 [2024-07-26 01:16:43.272447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.165 [2024-07-26 01:16:43.272474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.165 qpair failed and we were unable to recover it.
00:34:13.165 [2024-07-26 01:16:43.272610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.165 [2024-07-26 01:16:43.272656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.165 qpair failed and we were unable to recover it.
00:34:13.165 [2024-07-26 01:16:43.272777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.165 [2024-07-26 01:16:43.272807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.165 qpair failed and we were unable to recover it.
00:34:13.165 [2024-07-26 01:16:43.272991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.165 [2024-07-26 01:16:43.273020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.165 qpair failed and we were unable to recover it.
00:34:13.165 [2024-07-26 01:16:43.273190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.165 [2024-07-26 01:16:43.273218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.165 qpair failed and we were unable to recover it.
00:34:13.165 [2024-07-26 01:16:43.273375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.165 [2024-07-26 01:16:43.273406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.165 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.273546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.273576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.273686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.273716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.273891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.273918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.274076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.274106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.274265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.274301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.274450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.274480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.274644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.274671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.274815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.274842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.274980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.275007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.275157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.275187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.275392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.275419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.275579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.275608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.275788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.275819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.275944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.275973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.276134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.276161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.276328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.276359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.276505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.276550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.276705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.276734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.276915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.276942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.277085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.277112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.277260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.277310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.277430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.277461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.277618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.277645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.277785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.277811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.277947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.277973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.278107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.278135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.278286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.278314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.278455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.278500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.278653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.278682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.278835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.278864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.279020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.279047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.279194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.279222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.279336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.279370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.279573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.279601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.279764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.279791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.279950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.279980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.280101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.280129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.280289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.280320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.280483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.280511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.280614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.280659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.280808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.280837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.281028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.281057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.281232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.281259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.281414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.281444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.281568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.281599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.281762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.281789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.281919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.281946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.282088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.282132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.282308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.282335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.282470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.282521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.282686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.282714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.282848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.282892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.283073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.283101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.166 [2024-07-26 01:16:43.283226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.166 [2024-07-26 01:16:43.283253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.166 qpair failed and we were unable to recover it.
00:34:13.167 [2024-07-26 01:16:43.283418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.167 [2024-07-26 01:16:43.283448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.167 qpair failed and we were unable to recover it.
00:34:13.167 [2024-07-26 01:16:43.283580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.167 [2024-07-26 01:16:43.283611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.167 qpair failed and we were unable to recover it.
00:34:13.167 [2024-07-26 01:16:43.283755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.167 [2024-07-26 01:16:43.283785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.167 qpair failed and we were unable to recover it.
00:34:13.167 [2024-07-26 01:16:43.283897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.167 [2024-07-26 01:16:43.283937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.167 qpair failed and we were unable to recover it.
00:34:13.167 [2024-07-26 01:16:43.284124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.167 [2024-07-26 01:16:43.284152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.167 qpair failed and we were unable to recover it.
00:34:13.167 [2024-07-26 01:16:43.284266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.167 [2024-07-26 01:16:43.284310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.167 qpair failed and we were unable to recover it.
00:34:13.167 [2024-07-26 01:16:43.284461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.167 [2024-07-26 01:16:43.284495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.167 qpair failed and we were unable to recover it.
00:34:13.167 [2024-07-26 01:16:43.284643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.167 [2024-07-26 01:16:43.284672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.167 qpair failed and we were unable to recover it.
00:34:13.167 [2024-07-26 01:16:43.284797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.167 [2024-07-26 01:16:43.284822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.167 qpair failed and we were unable to recover it.
00:34:13.167 [2024-07-26 01:16:43.284995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.167 [2024-07-26 01:16:43.285024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.167 qpair failed and we were unable to recover it.
00:34:13.167 [2024-07-26 01:16:43.285178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.167 [2024-07-26 01:16:43.285209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.167 qpair failed and we were unable to recover it.
00:34:13.167 [2024-07-26 01:16:43.285362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.167 [2024-07-26 01:16:43.285391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.167 qpair failed and we were unable to recover it.
00:34:13.167 [2024-07-26 01:16:43.285550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.167 [2024-07-26 01:16:43.285577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.167 qpair failed and we were unable to recover it.
00:34:13.167 [2024-07-26 01:16:43.285715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.167 [2024-07-26 01:16:43.285741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.167 qpair failed and we were unable to recover it.
00:34:13.167 [2024-07-26 01:16:43.285852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.167 [2024-07-26 01:16:43.285877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.167 qpair failed and we were unable to recover it.
00:34:13.167 [2024-07-26 01:16:43.286001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.167 [2024-07-26 01:16:43.286032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.167 qpair failed and we were unable to recover it.
00:34:13.167 [2024-07-26 01:16:43.286217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.167 [2024-07-26 01:16:43.286245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.167 qpair failed and we were unable to recover it.
00:34:13.167 [2024-07-26 01:16:43.286435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.167 [2024-07-26 01:16:43.286464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.167 qpair failed and we were unable to recover it.
00:34:13.167 [2024-07-26 01:16:43.286648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.167 [2024-07-26 01:16:43.286674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.167 qpair failed and we were unable to recover it.
00:34:13.167 [2024-07-26 01:16:43.286821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.167 [2024-07-26 01:16:43.286863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.167 qpair failed and we were unable to recover it.
00:34:13.167 [2024-07-26 01:16:43.287016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.167 [2024-07-26 01:16:43.287044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.167 qpair failed and we were unable to recover it.
00:34:13.167 [2024-07-26 01:16:43.287245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.167 [2024-07-26 01:16:43.287276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.167 qpair failed and we were unable to recover it.
00:34:13.167 [2024-07-26 01:16:43.287426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.167 [2024-07-26 01:16:43.287456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.167 qpair failed and we were unable to recover it.
00:34:13.167 [2024-07-26 01:16:43.287612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.167 [2024-07-26 01:16:43.287641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.167 qpair failed and we were unable to recover it.
00:34:13.167 [2024-07-26 01:16:43.287777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.167 [2024-07-26 01:16:43.287803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.167 qpair failed and we were unable to recover it. 00:34:13.167 [2024-07-26 01:16:43.287950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.167 [2024-07-26 01:16:43.287998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.167 qpair failed and we were unable to recover it. 00:34:13.167 [2024-07-26 01:16:43.288177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.167 [2024-07-26 01:16:43.288205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.167 qpair failed and we were unable to recover it. 00:34:13.167 [2024-07-26 01:16:43.288338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.167 [2024-07-26 01:16:43.288365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.167 qpair failed and we were unable to recover it. 00:34:13.167 [2024-07-26 01:16:43.288470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.167 [2024-07-26 01:16:43.288496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.167 qpair failed and we were unable to recover it. 
00:34:13.167 [2024-07-26 01:16:43.288613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.167 [2024-07-26 01:16:43.288639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.167 qpair failed and we were unable to recover it. 00:34:13.167 [2024-07-26 01:16:43.288813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.167 [2024-07-26 01:16:43.288840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.167 qpair failed and we were unable to recover it. 00:34:13.167 [2024-07-26 01:16:43.288978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.167 [2024-07-26 01:16:43.289004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.167 qpair failed and we were unable to recover it. 00:34:13.167 [2024-07-26 01:16:43.289122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.167 [2024-07-26 01:16:43.289149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.167 qpair failed and we were unable to recover it. 00:34:13.167 [2024-07-26 01:16:43.289293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.167 [2024-07-26 01:16:43.289320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.167 qpair failed and we were unable to recover it. 
00:34:13.167 [2024-07-26 01:16:43.289480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.167 [2024-07-26 01:16:43.289509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.167 qpair failed and we were unable to recover it. 00:34:13.167 [2024-07-26 01:16:43.289692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.167 [2024-07-26 01:16:43.289719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.167 qpair failed and we were unable to recover it. 00:34:13.167 [2024-07-26 01:16:43.289881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.167 [2024-07-26 01:16:43.289908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.167 qpair failed and we were unable to recover it. 00:34:13.167 [2024-07-26 01:16:43.290014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.167 [2024-07-26 01:16:43.290057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.167 qpair failed and we were unable to recover it. 00:34:13.167 [2024-07-26 01:16:43.290192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.167 [2024-07-26 01:16:43.290223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.167 qpair failed and we were unable to recover it. 
00:34:13.167 [2024-07-26 01:16:43.290383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.167 [2024-07-26 01:16:43.290414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.167 qpair failed and we were unable to recover it. 00:34:13.167 [2024-07-26 01:16:43.290569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.167 [2024-07-26 01:16:43.290596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.167 qpair failed and we were unable to recover it. 00:34:13.167 [2024-07-26 01:16:43.290732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.167 [2024-07-26 01:16:43.290760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.167 qpair failed and we were unable to recover it. 00:34:13.167 [2024-07-26 01:16:43.290889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.167 [2024-07-26 01:16:43.290915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.167 qpair failed and we were unable to recover it. 00:34:13.167 [2024-07-26 01:16:43.291073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.167 [2024-07-26 01:16:43.291104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.167 qpair failed and we were unable to recover it. 
00:34:13.167 [2024-07-26 01:16:43.291283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.167 [2024-07-26 01:16:43.291314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.167 qpair failed and we were unable to recover it. 00:34:13.167 [2024-07-26 01:16:43.291493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.167 [2024-07-26 01:16:43.291522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.167 qpair failed and we were unable to recover it. 00:34:13.167 [2024-07-26 01:16:43.291641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.167 [2024-07-26 01:16:43.291693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.167 qpair failed and we were unable to recover it. 00:34:13.167 [2024-07-26 01:16:43.291838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.167 [2024-07-26 01:16:43.291865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.167 qpair failed and we were unable to recover it. 00:34:13.167 [2024-07-26 01:16:43.291995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.167 [2024-07-26 01:16:43.292022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.167 qpair failed and we were unable to recover it. 
00:34:13.168 [2024-07-26 01:16:43.292139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.292164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.292311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.292339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.292474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.292501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.292611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.292640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.292813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.292857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 
00:34:13.168 [2024-07-26 01:16:43.293019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.293046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.293204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.293232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.293389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.293416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.293553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.293580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.293730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.293760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 
00:34:13.168 [2024-07-26 01:16:43.293906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.293935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.294126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.294153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.294294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.294323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.294490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.294520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.294667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.294697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 
00:34:13.168 [2024-07-26 01:16:43.294856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.294883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.295022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.295075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.295209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.295238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.295406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.295433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.295599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.295626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 
00:34:13.168 [2024-07-26 01:16:43.295800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.295830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.296003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.296033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.296186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.296216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.296365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.296393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.296517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.296561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 
00:34:13.168 [2024-07-26 01:16:43.296735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.296765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.296938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.296968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.297137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.297164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.297297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.297344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.297505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.297533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 
00:34:13.168 [2024-07-26 01:16:43.297698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.297742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.297926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.297953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.298051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.298108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.298239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.298267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.298413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.298452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 
00:34:13.168 [2024-07-26 01:16:43.298601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.298629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.298742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.298784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.298929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.298961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.299107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.299134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.299265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.299297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 
00:34:13.168 [2024-07-26 01:16:43.299489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.299519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.299640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.299669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.299844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.299873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.300035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.300068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.300187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.300212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 
00:34:13.168 [2024-07-26 01:16:43.300343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.300376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.300519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.300547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.300677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.300703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.300834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.300877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.301036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.301069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 
00:34:13.168 [2024-07-26 01:16:43.301207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.301234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.301413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.301444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.168 [2024-07-26 01:16:43.301555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.168 [2024-07-26 01:16:43.301590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.168 qpair failed and we were unable to recover it. 00:34:13.169 [2024-07-26 01:16:43.301749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.169 [2024-07-26 01:16:43.301791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.169 qpair failed and we were unable to recover it. 00:34:13.169 [2024-07-26 01:16:43.301942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.169 [2024-07-26 01:16:43.301973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.169 qpair failed and we were unable to recover it. 
00:34:13.169 [2024-07-26 01:16:43.302132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.169 [2024-07-26 01:16:43.302159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.169 qpair failed and we were unable to recover it. 00:34:13.169 [2024-07-26 01:16:43.302335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.169 [2024-07-26 01:16:43.302365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.169 qpair failed and we were unable to recover it. 00:34:13.169 [2024-07-26 01:16:43.302678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.169 [2024-07-26 01:16:43.302742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.169 qpair failed and we were unable to recover it. 00:34:13.169 [2024-07-26 01:16:43.302892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.169 [2024-07-26 01:16:43.302922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.169 qpair failed and we were unable to recover it. 00:34:13.169 [2024-07-26 01:16:43.303079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.169 [2024-07-26 01:16:43.303106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.169 qpair failed and we were unable to recover it. 
00:34:13.169 [2024-07-26 01:16:43.303206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.169 [2024-07-26 01:16:43.303231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.169 qpair failed and we were unable to recover it.
[The preceding three-line error sequence repeats verbatim, with only the timestamps advancing, roughly 115 more times between 01:16:43.303 and 01:16:43.324 (log-clock 00:34:13.169-00:34:13.171); the repeats are elided here.]
00:34:13.171 [2024-07-26 01:16:43.324049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.324082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 00:34:13.171 [2024-07-26 01:16:43.324226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.324251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 00:34:13.171 [2024-07-26 01:16:43.324409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.324451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 00:34:13.171 [2024-07-26 01:16:43.324561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.324588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 00:34:13.171 [2024-07-26 01:16:43.324766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.324790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 
00:34:13.171 [2024-07-26 01:16:43.324896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.324929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 00:34:13.171 [2024-07-26 01:16:43.325049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.325082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 00:34:13.171 [2024-07-26 01:16:43.325247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.325276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 00:34:13.171 [2024-07-26 01:16:43.325437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.325462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 00:34:13.171 [2024-07-26 01:16:43.325598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.325638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 
00:34:13.171 [2024-07-26 01:16:43.325763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.325790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 00:34:13.171 [2024-07-26 01:16:43.325975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.326001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 00:34:13.171 [2024-07-26 01:16:43.326116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.326142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 00:34:13.171 [2024-07-26 01:16:43.326274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.326298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 00:34:13.171 [2024-07-26 01:16:43.326457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.326485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 
00:34:13.171 [2024-07-26 01:16:43.326639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.326665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 00:34:13.171 [2024-07-26 01:16:43.326770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.326795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 00:34:13.171 [2024-07-26 01:16:43.326909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.326943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 00:34:13.171 [2024-07-26 01:16:43.327114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.327152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 00:34:13.171 [2024-07-26 01:16:43.327295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.327323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 
00:34:13.171 [2024-07-26 01:16:43.327481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.327506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 00:34:13.171 [2024-07-26 01:16:43.327617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.327642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 00:34:13.171 [2024-07-26 01:16:43.327776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.327808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 00:34:13.171 [2024-07-26 01:16:43.327964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.327995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 00:34:13.171 [2024-07-26 01:16:43.328153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.328178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 
00:34:13.171 [2024-07-26 01:16:43.328315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.328340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 00:34:13.171 [2024-07-26 01:16:43.328477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.328505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 00:34:13.171 [2024-07-26 01:16:43.328690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.328721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 00:34:13.171 [2024-07-26 01:16:43.328854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.328889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 00:34:13.171 [2024-07-26 01:16:43.329066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.329093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 
00:34:13.171 [2024-07-26 01:16:43.329209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.329236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 00:34:13.171 [2024-07-26 01:16:43.329392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.329421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 00:34:13.171 [2024-07-26 01:16:43.329580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.329606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 00:34:13.171 [2024-07-26 01:16:43.329753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.171 [2024-07-26 01:16:43.329801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.171 qpair failed and we were unable to recover it. 00:34:13.171 [2024-07-26 01:16:43.329927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.329960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 
00:34:13.172 [2024-07-26 01:16:43.330117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.330147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.330307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.330334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.330441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.330468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.330636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.330680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.330849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.330877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 
00:34:13.172 [2024-07-26 01:16:43.331039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.331074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.331271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.331298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.331460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.331487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.331627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.331657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.331778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.331805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 
00:34:13.172 [2024-07-26 01:16:43.331972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.331999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.332172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.332203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.332326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.332356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.332537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.332563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.332754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.332785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 
00:34:13.172 [2024-07-26 01:16:43.332947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.332973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.333108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.333136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.333270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.333296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.333432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.333460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.333608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.333651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 
00:34:13.172 [2024-07-26 01:16:43.333829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.333858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.334022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.334049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.334241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.334272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.334428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.334461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.334586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.334616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 
00:34:13.172 [2024-07-26 01:16:43.334769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.334797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.334979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.335008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.335123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.335157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.335301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.335331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.335513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.335540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 
00:34:13.172 [2024-07-26 01:16:43.335711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.335740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.335892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.335921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.336045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.336084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.336266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.336293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.336442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.336476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 
00:34:13.172 [2024-07-26 01:16:43.336633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.336663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.336809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.336839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.336996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.337023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.337164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.337191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 00:34:13.172 [2024-07-26 01:16:43.337329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.172 [2024-07-26 01:16:43.337364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.172 qpair failed and we were unable to recover it. 
00:34:13.172 [2024-07-26 01:16:43.337476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.172 [2024-07-26 01:16:43.337507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.172 qpair failed and we were unable to recover it.
00:34:13.175 [2024-07-26 01:16:43.358878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.358909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.359018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.359045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.359187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.359214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.359324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.359352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.359539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.359570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 
00:34:13.175 [2024-07-26 01:16:43.359710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.359738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.359876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.359920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.360070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.360114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.360221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.360246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.360402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.360432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 
00:34:13.175 [2024-07-26 01:16:43.360559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.360587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.360725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.360751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.360878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.360908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.361023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.361071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.361237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.361264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 
00:34:13.175 [2024-07-26 01:16:43.361372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.361423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.361555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.361585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.361724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.361753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.361889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.361916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.362057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.362093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 
00:34:13.175 [2024-07-26 01:16:43.362205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.362236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.362434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.362466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.362591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.362618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.362774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.362817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.362996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.363025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 
00:34:13.175 [2024-07-26 01:16:43.363199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.363236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.363423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.363451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.363596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.363625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.363753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.363783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.363889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.363919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 
00:34:13.175 [2024-07-26 01:16:43.364048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.364084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.364193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.364221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.364392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.364428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.364570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.364599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.364781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.364808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 
00:34:13.175 [2024-07-26 01:16:43.364924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.364950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.365081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.365108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.365281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.365312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.365438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.365465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.365599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.365626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 
00:34:13.175 [2024-07-26 01:16:43.365785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.365815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.365960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.365989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.366125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.366153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.366264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.366292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.366427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.366454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 
00:34:13.175 [2024-07-26 01:16:43.366605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.366635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.366794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.366821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.366951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.366994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.367159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.367187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.367360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.367387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 
00:34:13.175 [2024-07-26 01:16:43.367550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.367577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.367707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.367749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.367908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.367935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.368122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.368162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 00:34:13.175 [2024-07-26 01:16:43.368328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.175 [2024-07-26 01:16:43.368356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.175 qpair failed and we were unable to recover it. 
00:34:13.175 [2024-07-26 01:16:43.368488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.176 [2024-07-26 01:16:43.368514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.176 qpair failed and we were unable to recover it. 00:34:13.176 [2024-07-26 01:16:43.368675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.176 [2024-07-26 01:16:43.368705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.176 qpair failed and we were unable to recover it. 00:34:13.176 [2024-07-26 01:16:43.368855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.176 [2024-07-26 01:16:43.368884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.176 qpair failed and we were unable to recover it. 00:34:13.176 [2024-07-26 01:16:43.369020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.176 [2024-07-26 01:16:43.369046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.176 qpair failed and we were unable to recover it. 00:34:13.176 [2024-07-26 01:16:43.369184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.176 [2024-07-26 01:16:43.369221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.176 qpair failed and we were unable to recover it. 
00:34:13.176 [2024-07-26 01:16:43.369335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.176 [2024-07-26 01:16:43.369362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.176 qpair failed and we were unable to recover it. 00:34:13.176 [2024-07-26 01:16:43.369508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.176 [2024-07-26 01:16:43.369535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.176 qpair failed and we were unable to recover it. 00:34:13.176 [2024-07-26 01:16:43.369676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.176 [2024-07-26 01:16:43.369704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.176 qpair failed and we were unable to recover it. 00:34:13.176 [2024-07-26 01:16:43.369868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.176 [2024-07-26 01:16:43.369913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.176 qpair failed and we were unable to recover it. 00:34:13.176 [2024-07-26 01:16:43.370090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.176 [2024-07-26 01:16:43.370121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.176 qpair failed and we were unable to recover it. 
00:34:13.176 [2024-07-26 01:16:43.370246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.176 [2024-07-26 01:16:43.370285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.176 qpair failed and we were unable to recover it. 00:34:13.176 [2024-07-26 01:16:43.370418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.176 [2024-07-26 01:16:43.370445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.176 qpair failed and we were unable to recover it. 00:34:13.176 [2024-07-26 01:16:43.370611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.176 [2024-07-26 01:16:43.370654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.176 qpair failed and we were unable to recover it. 00:34:13.176 [2024-07-26 01:16:43.370828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.176 [2024-07-26 01:16:43.370854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.176 qpair failed and we were unable to recover it. 00:34:13.176 [2024-07-26 01:16:43.370956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.176 [2024-07-26 01:16:43.370981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.176 qpair failed and we were unable to recover it. 
00:34:13.176 [2024-07-26 01:16:43.371112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.176 [2024-07-26 01:16:43.371149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.176 qpair failed and we were unable to recover it. 00:34:13.176 [2024-07-26 01:16:43.371337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.176 [2024-07-26 01:16:43.371367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.176 qpair failed and we were unable to recover it. 00:34:13.176 [2024-07-26 01:16:43.371493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.176 [2024-07-26 01:16:43.371520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.176 qpair failed and we were unable to recover it. 00:34:13.176 [2024-07-26 01:16:43.371640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.176 [2024-07-26 01:16:43.371667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.176 qpair failed and we were unable to recover it. 00:34:13.176 [2024-07-26 01:16:43.371772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.176 [2024-07-26 01:16:43.371799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.176 qpair failed and we were unable to recover it. 
00:34:13.176 [2024-07-26 01:16:43.371941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.176 [2024-07-26 01:16:43.371968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.176 qpair failed and we were unable to recover it. 00:34:13.176 [2024-07-26 01:16:43.372151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.176 [2024-07-26 01:16:43.372190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.176 qpair failed and we were unable to recover it. 00:34:13.176 [2024-07-26 01:16:43.372320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.176 [2024-07-26 01:16:43.372349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.176 qpair failed and we were unable to recover it. 00:34:13.176 [2024-07-26 01:16:43.372508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.176 [2024-07-26 01:16:43.372535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.176 qpair failed and we were unable to recover it. 00:34:13.176 [2024-07-26 01:16:43.372659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.176 [2024-07-26 01:16:43.372686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.176 qpair failed and we were unable to recover it. 
00:34:13.178 [2024-07-26 01:16:43.392713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.178 [2024-07-26 01:16:43.392741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.178 qpair failed and we were unable to recover it. 00:34:13.178 [2024-07-26 01:16:43.392937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.178 [2024-07-26 01:16:43.392967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.178 qpair failed and we were unable to recover it. 00:34:13.178 [2024-07-26 01:16:43.393100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.178 [2024-07-26 01:16:43.393130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.178 qpair failed and we were unable to recover it. 00:34:13.178 [2024-07-26 01:16:43.393308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.178 [2024-07-26 01:16:43.393338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.178 qpair failed and we were unable to recover it. 00:34:13.178 [2024-07-26 01:16:43.393464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.178 [2024-07-26 01:16:43.393492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.178 qpair failed and we were unable to recover it. 
00:34:13.178 [2024-07-26 01:16:43.393651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.178 [2024-07-26 01:16:43.393680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.178 qpair failed and we were unable to recover it. 00:34:13.178 [2024-07-26 01:16:43.393875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.178 [2024-07-26 01:16:43.393903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.178 qpair failed and we were unable to recover it. 00:34:13.178 [2024-07-26 01:16:43.394006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.178 [2024-07-26 01:16:43.394031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.178 qpair failed and we were unable to recover it. 00:34:13.178 [2024-07-26 01:16:43.394198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.178 [2024-07-26 01:16:43.394232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.178 qpair failed and we were unable to recover it. 00:34:13.178 [2024-07-26 01:16:43.394343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.178 [2024-07-26 01:16:43.394368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.178 qpair failed and we were unable to recover it. 
00:34:13.178 [2024-07-26 01:16:43.394499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.178 [2024-07-26 01:16:43.394531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.178 qpair failed and we were unable to recover it. 00:34:13.178 [2024-07-26 01:16:43.394648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.178 [2024-07-26 01:16:43.394675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.178 qpair failed and we were unable to recover it. 00:34:13.178 [2024-07-26 01:16:43.394814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.178 [2024-07-26 01:16:43.394867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.178 qpair failed and we were unable to recover it. 00:34:13.178 [2024-07-26 01:16:43.395039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.178 [2024-07-26 01:16:43.395077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.178 qpair failed and we were unable to recover it. 00:34:13.178 [2024-07-26 01:16:43.395205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.178 [2024-07-26 01:16:43.395232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.178 qpair failed and we were unable to recover it. 
00:34:13.178 [2024-07-26 01:16:43.395346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.178 [2024-07-26 01:16:43.395373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.178 qpair failed and we were unable to recover it. 00:34:13.178 [2024-07-26 01:16:43.395489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.178 [2024-07-26 01:16:43.395517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.178 qpair failed and we were unable to recover it. 00:34:13.178 [2024-07-26 01:16:43.395654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.178 [2024-07-26 01:16:43.395680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.178 qpair failed and we were unable to recover it. 00:34:13.178 [2024-07-26 01:16:43.395819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.178 [2024-07-26 01:16:43.395847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.178 qpair failed and we were unable to recover it. 00:34:13.178 [2024-07-26 01:16:43.396044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.178 [2024-07-26 01:16:43.396085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.178 qpair failed and we were unable to recover it. 
00:34:13.178 [2024-07-26 01:16:43.396266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.178 [2024-07-26 01:16:43.396293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.178 qpair failed and we were unable to recover it. 00:34:13.178 [2024-07-26 01:16:43.396445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.178 [2024-07-26 01:16:43.396475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.178 qpair failed and we were unable to recover it. 00:34:13.178 [2024-07-26 01:16:43.396592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.178 [2024-07-26 01:16:43.396619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.178 qpair failed and we were unable to recover it. 00:34:13.178 [2024-07-26 01:16:43.396806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.178 [2024-07-26 01:16:43.396845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.178 qpair failed and we were unable to recover it. 00:34:13.178 [2024-07-26 01:16:43.396969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.178 [2024-07-26 01:16:43.396995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.178 qpair failed and we were unable to recover it. 
00:34:13.178 [2024-07-26 01:16:43.397129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.397156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.397317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.397346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.397457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.397499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.397642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.397670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.397812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.397844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 
00:34:13.179 [2024-07-26 01:16:43.397955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.397987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.398150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.398193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.398321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.398348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.398485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.398511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.398691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.398720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 
00:34:13.179 [2024-07-26 01:16:43.398860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.398888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.399024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.399052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.399216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.399246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.399427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.399453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.399596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.399624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 
00:34:13.179 [2024-07-26 01:16:43.399801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.399829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.399987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.400017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.400160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.400197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.400324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.400372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.400535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.400563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 
00:34:13.179 [2024-07-26 01:16:43.400700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.400727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.400890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.400934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.401042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.401104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.401275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.401303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.401435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.401480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 
00:34:13.179 [2024-07-26 01:16:43.401633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.401660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.401798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.401825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.402002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.402028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.402192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.402229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.402375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.402401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 
00:34:13.179 [2024-07-26 01:16:43.402560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.402602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.402730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.402757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.402869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.402894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.403050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.403087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.403255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.403283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 
00:34:13.179 [2024-07-26 01:16:43.403416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.403442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.403577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.403622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.403784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.403810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.403956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.403993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.404168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.404195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 
00:34:13.179 [2024-07-26 01:16:43.404357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.404409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.404668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.404695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.404809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.404837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.404998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.405038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.405204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.405237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 
00:34:13.179 [2024-07-26 01:16:43.405398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.405429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.405606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.405636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.405753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.405780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.405915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.405942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 00:34:13.179 [2024-07-26 01:16:43.406127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.179 [2024-07-26 01:16:43.406157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.179 qpair failed and we were unable to recover it. 
00:34:13.180 [2024-07-26 01:16:43.406309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.180 [2024-07-26 01:16:43.406342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.180 qpair failed and we were unable to recover it. 00:34:13.180 [2024-07-26 01:16:43.406480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.180 [2024-07-26 01:16:43.406513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.180 qpair failed and we were unable to recover it. 00:34:13.180 [2024-07-26 01:16:43.406673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.180 [2024-07-26 01:16:43.406716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.180 qpair failed and we were unable to recover it. 00:34:13.180 [2024-07-26 01:16:43.406871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.180 [2024-07-26 01:16:43.406898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.180 qpair failed and we were unable to recover it. 00:34:13.180 [2024-07-26 01:16:43.407033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.180 [2024-07-26 01:16:43.407065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.180 qpair failed and we were unable to recover it. 
00:34:13.182 [... repeated identical "connect() failed, errno = 111" / "sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." entries (2024-07-26 01:16:43.407237 through 01:16:43.427249) elided ...]
00:34:13.182 [2024-07-26 01:16:43.427359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.427387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.427517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.427544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.427678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.427704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.427835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.427861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.428009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.428036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 
00:34:13.182 [2024-07-26 01:16:43.428210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.428241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.428400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.428426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.428559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.428602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.428789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.428816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.428952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.428979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 
00:34:13.182 [2024-07-26 01:16:43.429117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.429145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.429326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.429355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.429604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.429660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.429804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.429831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.429994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.430020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 
00:34:13.182 [2024-07-26 01:16:43.430157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.430187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.430316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.430345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.430493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.430523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.430674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.430700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.430839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.430885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 
00:34:13.182 [2024-07-26 01:16:43.431033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.431068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.431200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.431228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.431366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.431393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.431493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.431520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.431626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.431653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 
00:34:13.182 [2024-07-26 01:16:43.431811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.431837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.431998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.432024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.432177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.432204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.432343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.432386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.432526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.432556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 
00:34:13.182 [2024-07-26 01:16:43.432708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.432734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.432841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.432867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.433049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.433089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.433253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.433280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.433413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.433441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 
00:34:13.182 [2024-07-26 01:16:43.433592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.433635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.433791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.433820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.434006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.434033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.434204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.434230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.434389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.434423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 
00:34:13.182 [2024-07-26 01:16:43.434618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.434672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.434820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.434850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.435004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.435031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.435155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.435182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.182 [2024-07-26 01:16:43.435344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.435371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 
00:34:13.182 [2024-07-26 01:16:43.435554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.182 [2024-07-26 01:16:43.435581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.182 qpair failed and we were unable to recover it. 00:34:13.183 [2024-07-26 01:16:43.435715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.435743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 00:34:13.183 [2024-07-26 01:16:43.435901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.435931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 00:34:13.183 [2024-07-26 01:16:43.436089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.436117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 00:34:13.183 [2024-07-26 01:16:43.436252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.436278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 
00:34:13.183 [2024-07-26 01:16:43.436451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.436478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 00:34:13.183 [2024-07-26 01:16:43.436657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.436686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 00:34:13.183 [2024-07-26 01:16:43.436862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.436892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 00:34:13.183 [2024-07-26 01:16:43.437071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.437099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 00:34:13.183 [2024-07-26 01:16:43.437236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.437262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 
00:34:13.183 [2024-07-26 01:16:43.437387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.437414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 00:34:13.183 [2024-07-26 01:16:43.437612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.437665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 00:34:13.183 [2024-07-26 01:16:43.437816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.437846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 00:34:13.183 [2024-07-26 01:16:43.438006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.438033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 00:34:13.183 [2024-07-26 01:16:43.438172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.438199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 
00:34:13.183 [2024-07-26 01:16:43.438308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.438351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 00:34:13.183 [2024-07-26 01:16:43.438524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.438553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 00:34:13.183 [2024-07-26 01:16:43.438734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.438761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 00:34:13.183 [2024-07-26 01:16:43.438909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.438938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 00:34:13.183 [2024-07-26 01:16:43.439094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.439124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 
00:34:13.183 [2024-07-26 01:16:43.439276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.439305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 00:34:13.183 [2024-07-26 01:16:43.439469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.439496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 00:34:13.183 [2024-07-26 01:16:43.439602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.439629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 00:34:13.183 [2024-07-26 01:16:43.439790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.439817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 00:34:13.183 [2024-07-26 01:16:43.439978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.440007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 
00:34:13.183 [2024-07-26 01:16:43.440159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.440186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 00:34:13.183 [2024-07-26 01:16:43.440344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.440389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 00:34:13.183 [2024-07-26 01:16:43.440548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.440574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 00:34:13.183 [2024-07-26 01:16:43.440714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.440741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 00:34:13.183 [2024-07-26 01:16:43.440843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.440871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 
00:34:13.183 [2024-07-26 01:16:43.440989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.441015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 00:34:13.183 [2024-07-26 01:16:43.441197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.441227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 00:34:13.183 [2024-07-26 01:16:43.441372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.441402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 00:34:13.183 [2024-07-26 01:16:43.441583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.441609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 00:34:13.183 [2024-07-26 01:16:43.441785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.183 [2024-07-26 01:16:43.441819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.183 qpair failed and we were unable to recover it. 
00:34:13.185 [2024-07-26 01:16:43.461508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.185 [2024-07-26 01:16:43.461535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.185 qpair failed and we were unable to recover it. 00:34:13.185 [2024-07-26 01:16:43.461719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.185 [2024-07-26 01:16:43.461748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.185 qpair failed and we were unable to recover it. 00:34:13.185 [2024-07-26 01:16:43.461969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.185 [2024-07-26 01:16:43.461998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.185 qpair failed and we were unable to recover it. 00:34:13.185 [2024-07-26 01:16:43.462217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.185 [2024-07-26 01:16:43.462247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.185 qpair failed and we were unable to recover it. 00:34:13.185 [2024-07-26 01:16:43.462409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.185 [2024-07-26 01:16:43.462435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.185 qpair failed and we were unable to recover it. 
00:34:13.185 [2024-07-26 01:16:43.462566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.185 [2024-07-26 01:16:43.462592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.185 qpair failed and we were unable to recover it. 00:34:13.185 [2024-07-26 01:16:43.462722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.185 [2024-07-26 01:16:43.462751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.185 qpair failed and we were unable to recover it. 00:34:13.185 [2024-07-26 01:16:43.462867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.185 [2024-07-26 01:16:43.462896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.185 qpair failed and we were unable to recover it. 00:34:13.185 [2024-07-26 01:16:43.463057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.185 [2024-07-26 01:16:43.463090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.185 qpair failed and we were unable to recover it. 00:34:13.185 [2024-07-26 01:16:43.463228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.185 [2024-07-26 01:16:43.463254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.185 qpair failed and we were unable to recover it. 
00:34:13.185 [2024-07-26 01:16:43.463372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.185 [2024-07-26 01:16:43.463401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.185 qpair failed and we were unable to recover it. 00:34:13.185 [2024-07-26 01:16:43.463584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.185 [2024-07-26 01:16:43.463613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.185 qpair failed and we were unable to recover it. 00:34:13.185 [2024-07-26 01:16:43.463764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.185 [2024-07-26 01:16:43.463791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.185 qpair failed and we were unable to recover it. 00:34:13.185 [2024-07-26 01:16:43.463957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.185 [2024-07-26 01:16:43.464001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.185 qpair failed and we were unable to recover it. 00:34:13.185 [2024-07-26 01:16:43.464152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.185 [2024-07-26 01:16:43.464182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.185 qpair failed and we were unable to recover it. 
00:34:13.185 [2024-07-26 01:16:43.464355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.185 [2024-07-26 01:16:43.464384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.185 qpair failed and we were unable to recover it. 00:34:13.185 [2024-07-26 01:16:43.464571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.185 [2024-07-26 01:16:43.464598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.185 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.464780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.464809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.464990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.465016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.465130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.465157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 
00:34:13.186 [2024-07-26 01:16:43.465323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.465349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.465465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.465492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.465628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.465655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.465841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.465871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.465994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.466021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 
00:34:13.186 [2024-07-26 01:16:43.466185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.466212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.466393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.466422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.466572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.466602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.466779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.466806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.466958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.466988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 
00:34:13.186 [2024-07-26 01:16:43.467144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.467172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.467300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.467327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.467485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.467512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.467671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.467700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.467828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.467855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 
00:34:13.186 [2024-07-26 01:16:43.467968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.467995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.468159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.468190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.468349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.468379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.468581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.468610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.468733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.468762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 
00:34:13.186 [2024-07-26 01:16:43.468978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.469004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.469143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.469170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.469390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.469418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.469565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.469596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.469724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.469750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 
00:34:13.186 [2024-07-26 01:16:43.469895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.469924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.470074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.470119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.470270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.470308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.470461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.470488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.470626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.470652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 
00:34:13.186 [2024-07-26 01:16:43.470811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.470840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.470962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.470993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.471155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.471182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.471364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.471393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.471601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.471653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 
00:34:13.186 [2024-07-26 01:16:43.471797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.471824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.471978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.472005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.472166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.472193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.472298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.472324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.472523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.472550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 
00:34:13.186 [2024-07-26 01:16:43.472710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.472737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.472846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.472891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.473075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.473103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.473265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.473291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 00:34:13.186 [2024-07-26 01:16:43.473425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.186 [2024-07-26 01:16:43.473451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.186 qpair failed and we were unable to recover it. 
00:34:13.187 [2024-07-26 01:16:43.473614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.187 [2024-07-26 01:16:43.473640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.187 qpair failed and we were unable to recover it. 00:34:13.187 [2024-07-26 01:16:43.473790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.187 [2024-07-26 01:16:43.473820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.187 qpair failed and we were unable to recover it. 00:34:13.187 [2024-07-26 01:16:43.473942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.187 [2024-07-26 01:16:43.473971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.187 qpair failed and we were unable to recover it. 00:34:13.187 [2024-07-26 01:16:43.474113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.187 [2024-07-26 01:16:43.474155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.187 qpair failed and we were unable to recover it. 00:34:13.187 [2024-07-26 01:16:43.474325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.187 [2024-07-26 01:16:43.474354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.187 qpair failed and we were unable to recover it. 
00:34:13.187 [2024-07-26 01:16:43.474525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.187 [2024-07-26 01:16:43.474554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.187 qpair failed and we were unable to recover it. 00:34:13.187 [2024-07-26 01:16:43.474724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.187 [2024-07-26 01:16:43.474753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.187 qpair failed and we were unable to recover it. 00:34:13.187 [2024-07-26 01:16:43.474929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.187 [2024-07-26 01:16:43.474955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.187 qpair failed and we were unable to recover it. 00:34:13.187 [2024-07-26 01:16:43.475094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.187 [2024-07-26 01:16:43.475122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.187 qpair failed and we were unable to recover it. 00:34:13.187 [2024-07-26 01:16:43.475229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.187 [2024-07-26 01:16:43.475255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.187 qpair failed and we were unable to recover it. 
00:34:13.187 [2024-07-26 01:16:43.475408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.187 [2024-07-26 01:16:43.475437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.187 qpair failed and we were unable to recover it. 00:34:13.187 [2024-07-26 01:16:43.475573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.187 [2024-07-26 01:16:43.475603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.187 qpair failed and we were unable to recover it. 00:34:13.187 [2024-07-26 01:16:43.475742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.187 [2024-07-26 01:16:43.475768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.187 qpair failed and we were unable to recover it. 00:34:13.187 [2024-07-26 01:16:43.475925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.187 [2024-07-26 01:16:43.475955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.187 qpair failed and we were unable to recover it. 00:34:13.187 [2024-07-26 01:16:43.476084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.187 [2024-07-26 01:16:43.476114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.187 qpair failed and we were unable to recover it. 
00:34:13.189 [2024-07-26 01:16:43.496555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.496585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.496758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.496787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.496934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.496963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.497102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.497129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.497294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.497321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 
00:34:13.189 [2024-07-26 01:16:43.497482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.497512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.497672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.497703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.497839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.497867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.498004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.498031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.498168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.498198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 
00:34:13.189 [2024-07-26 01:16:43.498346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.498373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.498533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.498577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.498743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.498770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.498908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.498936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.499102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.499130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 
00:34:13.189 [2024-07-26 01:16:43.499309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.499339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.499616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.499667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.499814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.499844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.500021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.500047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.500166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.500194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 
00:34:13.189 [2024-07-26 01:16:43.500339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.500365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.500536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.500563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.500724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.500751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.500866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.500904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.501052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.501102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 
00:34:13.189 [2024-07-26 01:16:43.501249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.501278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.501400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.501428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.501597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.501624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.501782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.501811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.501960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.501990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 
00:34:13.189 [2024-07-26 01:16:43.502142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.502170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.502302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.502329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.502521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.502550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.502765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.502792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.502926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.502953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 
00:34:13.189 [2024-07-26 01:16:43.503089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.503135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.503315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.503344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.503518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.503548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.503729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.189 [2024-07-26 01:16:43.503755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.189 qpair failed and we were unable to recover it. 00:34:13.189 [2024-07-26 01:16:43.503947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.503976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 
00:34:13.190 [2024-07-26 01:16:43.504137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.504168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.504312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.504341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.504496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.504523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.504655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.504683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.504811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.504838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 
00:34:13.190 [2024-07-26 01:16:43.504971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.505001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.505168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.505200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.505337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.505364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.505526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.505570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.505721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.505750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 
00:34:13.190 [2024-07-26 01:16:43.505934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.505960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.506091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.506119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.506283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.506328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.506479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.506505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.506653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.506680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 
00:34:13.190 [2024-07-26 01:16:43.506784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.506812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.506947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.506979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.507150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.507180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.507338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.507365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.507482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.507509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 
00:34:13.190 [2024-07-26 01:16:43.507672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.507699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.507903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.507930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.508095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.508122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.508264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.508291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.508405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.508432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 
00:34:13.190 [2024-07-26 01:16:43.508594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.508636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.508814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.508841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.508992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.509022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.509154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.509184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.509342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.509369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 
00:34:13.190 [2024-07-26 01:16:43.509526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.509553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.509653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.509680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.509819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.509846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.510011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.510038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.510205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.510231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 
00:34:13.190 [2024-07-26 01:16:43.510397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.510426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.510584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.510611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.510745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.510772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.510903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.510945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 00:34:13.190 [2024-07-26 01:16:43.511089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.190 [2024-07-26 01:16:43.511131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.190 qpair failed and we were unable to recover it. 
00:34:13.192 [2024-07-26 01:16:43.530718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.192 [2024-07-26 01:16:43.530748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.192 qpair failed and we were unable to recover it. 00:34:13.192 [2024-07-26 01:16:43.530864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.192 [2024-07-26 01:16:43.530894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.192 qpair failed and we were unable to recover it. 00:34:13.192 [2024-07-26 01:16:43.531042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.192 [2024-07-26 01:16:43.531081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.192 qpair failed and we were unable to recover it. 00:34:13.192 [2024-07-26 01:16:43.531251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.192 [2024-07-26 01:16:43.531277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.192 qpair failed and we were unable to recover it. 00:34:13.192 [2024-07-26 01:16:43.531414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.192 [2024-07-26 01:16:43.531440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.192 qpair failed and we were unable to recover it. 
00:34:13.192 [2024-07-26 01:16:43.531573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.192 [2024-07-26 01:16:43.531599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.192 qpair failed and we were unable to recover it. 00:34:13.192 [2024-07-26 01:16:43.531751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.192 [2024-07-26 01:16:43.531780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.192 qpair failed and we were unable to recover it. 00:34:13.192 [2024-07-26 01:16:43.531917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.192 [2024-07-26 01:16:43.531946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.192 qpair failed and we were unable to recover it. 00:34:13.192 [2024-07-26 01:16:43.532131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.192 [2024-07-26 01:16:43.532158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.192 qpair failed and we were unable to recover it. 00:34:13.192 [2024-07-26 01:16:43.532281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.192 [2024-07-26 01:16:43.532310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.192 qpair failed and we were unable to recover it. 
00:34:13.192 [2024-07-26 01:16:43.532468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.192 [2024-07-26 01:16:43.532497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.192 qpair failed and we were unable to recover it. 00:34:13.192 [2024-07-26 01:16:43.532635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.192 [2024-07-26 01:16:43.532664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.192 qpair failed and we were unable to recover it. 00:34:13.192 [2024-07-26 01:16:43.532789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.192 [2024-07-26 01:16:43.532815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.192 qpair failed and we were unable to recover it. 00:34:13.192 [2024-07-26 01:16:43.532948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.192 [2024-07-26 01:16:43.532975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.192 qpair failed and we were unable to recover it. 00:34:13.192 [2024-07-26 01:16:43.533141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.192 [2024-07-26 01:16:43.533170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 
00:34:13.193 [2024-07-26 01:16:43.533327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.533354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.533496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.533527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.533683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.533712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.533861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.533890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.534031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.534067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 
00:34:13.193 [2024-07-26 01:16:43.534228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.534255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.534387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.534414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.534620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.534678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.534852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.534879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.534983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.535010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 
00:34:13.193 [2024-07-26 01:16:43.535140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.535167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.535305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.535332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.535446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.535473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.535644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.535670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.535811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.535838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 
00:34:13.193 [2024-07-26 01:16:43.536040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.536086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.536249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.536294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.536423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.536449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.536553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.536580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.536715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.536742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 
00:34:13.193 [2024-07-26 01:16:43.536892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.536921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.537107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.537135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.537249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.537294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.537444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.537473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.537593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.537623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 
00:34:13.193 [2024-07-26 01:16:43.537785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.537811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.537944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.537971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.538107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.538133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.538336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.538363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.538494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.538520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 
00:34:13.193 [2024-07-26 01:16:43.538696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.538725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.538870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.538899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.539070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.539101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.539251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.539278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.539412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.539439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 
00:34:13.193 [2024-07-26 01:16:43.539578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.539605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.539739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.539766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.539904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.539930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.540091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.540119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.540259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.540286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 
00:34:13.193 [2024-07-26 01:16:43.540456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.540483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.540620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.540650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.540761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.540788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.540923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.540949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.541067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.541095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 
00:34:13.193 [2024-07-26 01:16:43.541255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.541281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.541433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.541462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.541617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.541644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.541780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.541807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.541942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.541969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 
00:34:13.193 [2024-07-26 01:16:43.542147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.542177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.542324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.542354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.542499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.542529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.542653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.542680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.542820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.542847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 
00:34:13.193 [2024-07-26 01:16:43.542984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.543028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.193 [2024-07-26 01:16:43.543187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.193 [2024-07-26 01:16:43.543216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.193 qpair failed and we were unable to recover it. 00:34:13.194 [2024-07-26 01:16:43.543346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.194 [2024-07-26 01:16:43.543373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.194 qpair failed and we were unable to recover it. 00:34:13.194 [2024-07-26 01:16:43.543534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.194 [2024-07-26 01:16:43.543561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.194 qpair failed and we were unable to recover it. 00:34:13.194 [2024-07-26 01:16:43.543697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.194 [2024-07-26 01:16:43.543726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.194 qpair failed and we were unable to recover it. 
00:34:13.194 [2024-07-26 01:16:43.543845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.194 [2024-07-26 01:16:43.543874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.194 qpair failed and we were unable to recover it. 00:34:13.194 [2024-07-26 01:16:43.544024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.194 [2024-07-26 01:16:43.544051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.194 qpair failed and we were unable to recover it. 00:34:13.194 [2024-07-26 01:16:43.544185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.194 [2024-07-26 01:16:43.544227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.194 qpair failed and we were unable to recover it. 00:34:13.194 [2024-07-26 01:16:43.544365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.194 [2024-07-26 01:16:43.544395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.194 qpair failed and we were unable to recover it. 00:34:13.194 [2024-07-26 01:16:43.544546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.194 [2024-07-26 01:16:43.544576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.194 qpair failed and we were unable to recover it. 
00:34:13.194 [...] the same three-message sequence (posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats 110 more times, from [2024-07-26 01:16:43.544724] through [2024-07-26 01:16:43.564046].
00:34:13.196 [2024-07-26 01:16:43.564198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.196 [2024-07-26 01:16:43.564225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.196 qpair failed and we were unable to recover it. 00:34:13.196 [2024-07-26 01:16:43.564391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.196 [2024-07-26 01:16:43.564421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.196 qpair failed and we were unable to recover it. 00:34:13.196 [2024-07-26 01:16:43.564568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.196 [2024-07-26 01:16:43.564595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.196 qpair failed and we were unable to recover it. 00:34:13.196 [2024-07-26 01:16:43.564758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.196 [2024-07-26 01:16:43.564801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.196 qpair failed and we were unable to recover it. 00:34:13.196 [2024-07-26 01:16:43.564943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.196 [2024-07-26 01:16:43.564972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.196 qpair failed and we were unable to recover it. 
00:34:13.196 [2024-07-26 01:16:43.565092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.196 [2024-07-26 01:16:43.565122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.196 qpair failed and we were unable to recover it. 00:34:13.196 [2024-07-26 01:16:43.565303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.196 [2024-07-26 01:16:43.565330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.196 qpair failed and we were unable to recover it. 00:34:13.196 [2024-07-26 01:16:43.565438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.196 [2024-07-26 01:16:43.565483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.196 qpair failed and we were unable to recover it. 00:34:13.196 [2024-07-26 01:16:43.565633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.196 [2024-07-26 01:16:43.565662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.196 qpair failed and we were unable to recover it. 00:34:13.196 [2024-07-26 01:16:43.565797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.196 [2024-07-26 01:16:43.565824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.196 qpair failed and we were unable to recover it. 
00:34:13.196 [2024-07-26 01:16:43.565961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.196 [2024-07-26 01:16:43.565988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.196 qpair failed and we were unable to recover it. 00:34:13.196 [2024-07-26 01:16:43.566125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.196 [2024-07-26 01:16:43.566153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.196 qpair failed and we were unable to recover it. 00:34:13.196 [2024-07-26 01:16:43.566286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.196 [2024-07-26 01:16:43.566313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.196 qpair failed and we were unable to recover it. 00:34:13.196 [2024-07-26 01:16:43.566491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.196 [2024-07-26 01:16:43.566522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.196 qpair failed and we were unable to recover it. 00:34:13.196 [2024-07-26 01:16:43.566668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.196 [2024-07-26 01:16:43.566695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.196 qpair failed and we were unable to recover it. 
00:34:13.196 [2024-07-26 01:16:43.566872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.196 [2024-07-26 01:16:43.566902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.196 qpair failed and we were unable to recover it. 00:34:13.196 [2024-07-26 01:16:43.567050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.196 [2024-07-26 01:16:43.567087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.196 qpair failed and we were unable to recover it. 00:34:13.196 [2024-07-26 01:16:43.567218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.196 [2024-07-26 01:16:43.567246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.196 qpair failed and we were unable to recover it. 00:34:13.196 [2024-07-26 01:16:43.567389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.567416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.567575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.567618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 
00:34:13.197 [2024-07-26 01:16:43.567815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.567866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.568028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.568055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.568199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.568226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.568336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.568363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.568505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.568532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 
00:34:13.197 [2024-07-26 01:16:43.568660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.568689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.568818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.568845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.568984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.569012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.569147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.569175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.569317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.569346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 
00:34:13.197 [2024-07-26 01:16:43.569475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.569502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.569633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.569657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.569785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.569814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.569982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.570013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.570156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.570183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 
00:34:13.197 [2024-07-26 01:16:43.570342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.570368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.570496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.570525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.570665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.570694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.570855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.570882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.571016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.571043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 
00:34:13.197 [2024-07-26 01:16:43.571186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.571229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.571368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.571394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.571521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.571547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.571671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.571697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.571877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.571920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 
00:34:13.197 [2024-07-26 01:16:43.572076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.572120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.572235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.572262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.572378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.572404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.572594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.572623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.572744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.572774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 
00:34:13.197 [2024-07-26 01:16:43.572901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.572927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.573070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.573096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.573292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.573321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.573450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.573484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.573643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.573671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 
00:34:13.197 [2024-07-26 01:16:43.573783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.573811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.573976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.574005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.574164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.574191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.197 [2024-07-26 01:16:43.574329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.197 [2024-07-26 01:16:43.574355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.197 qpair failed and we were unable to recover it. 00:34:13.198 [2024-07-26 01:16:43.574505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.198 [2024-07-26 01:16:43.574548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.198 qpair failed and we were unable to recover it. 
00:34:13.198 [2024-07-26 01:16:43.574671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.198 [2024-07-26 01:16:43.574700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.198 qpair failed and we were unable to recover it. 00:34:13.198 [2024-07-26 01:16:43.574823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.198 [2024-07-26 01:16:43.574852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.198 qpair failed and we were unable to recover it. 00:34:13.198 [2024-07-26 01:16:43.574977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.471 [2024-07-26 01:16:43.575004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.471 qpair failed and we were unable to recover it. 00:34:13.471 [2024-07-26 01:16:43.575143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.471 [2024-07-26 01:16:43.575170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.471 qpair failed and we were unable to recover it. 00:34:13.471 [2024-07-26 01:16:43.575276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.471 [2024-07-26 01:16:43.575303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.471 qpair failed and we were unable to recover it. 
00:34:13.471 [2024-07-26 01:16:43.575416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.471 [2024-07-26 01:16:43.575442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.471 qpair failed and we were unable to recover it. 00:34:13.471 [2024-07-26 01:16:43.575557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.471 [2024-07-26 01:16:43.575583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.471 qpair failed and we were unable to recover it. 00:34:13.472 [2024-07-26 01:16:43.575688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.472 [2024-07-26 01:16:43.575715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.472 qpair failed and we were unable to recover it. 00:34:13.472 [2024-07-26 01:16:43.575851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.472 [2024-07-26 01:16:43.575880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.472 qpair failed and we were unable to recover it. 00:34:13.472 [2024-07-26 01:16:43.576004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.472 [2024-07-26 01:16:43.576034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.472 qpair failed and we were unable to recover it. 
00:34:13.472 [2024-07-26 01:16:43.576188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.472 [2024-07-26 01:16:43.576215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.472 qpair failed and we were unable to recover it. 00:34:13.472 [2024-07-26 01:16:43.576344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.472 [2024-07-26 01:16:43.576371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.472 qpair failed and we were unable to recover it. 00:34:13.472 [2024-07-26 01:16:43.576535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.472 [2024-07-26 01:16:43.576561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.472 qpair failed and we were unable to recover it. 00:34:13.472 [2024-07-26 01:16:43.576678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.472 [2024-07-26 01:16:43.576704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.472 qpair failed and we were unable to recover it. 00:34:13.472 [2024-07-26 01:16:43.576834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.472 [2024-07-26 01:16:43.576861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.472 qpair failed and we were unable to recover it. 
00:34:13.472 [2024-07-26 01:16:43.577043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.472 [2024-07-26 01:16:43.577089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.472 qpair failed and we were unable to recover it. 00:34:13.472 [2024-07-26 01:16:43.577249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.472 [2024-07-26 01:16:43.577275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.472 qpair failed and we were unable to recover it. 00:34:13.472 [2024-07-26 01:16:43.577410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.472 [2024-07-26 01:16:43.577436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.472 qpair failed and we were unable to recover it. 00:34:13.472 [2024-07-26 01:16:43.577566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.472 [2024-07-26 01:16:43.577592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.472 qpair failed and we were unable to recover it. 00:34:13.472 [2024-07-26 01:16:43.577704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.472 [2024-07-26 01:16:43.577730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.472 qpair failed and we were unable to recover it. 
00:34:13.475 [2024-07-26 01:16:43.596772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.475 [2024-07-26 01:16:43.596799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.475 qpair failed and we were unable to recover it. 00:34:13.475 [2024-07-26 01:16:43.596908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.475 [2024-07-26 01:16:43.596935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.475 qpair failed and we were unable to recover it. 00:34:13.475 [2024-07-26 01:16:43.597034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.475 [2024-07-26 01:16:43.597067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.475 qpair failed and we were unable to recover it. 00:34:13.475 [2024-07-26 01:16:43.597232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.475 [2024-07-26 01:16:43.597258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.475 qpair failed and we were unable to recover it. 00:34:13.475 [2024-07-26 01:16:43.597370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.475 [2024-07-26 01:16:43.597397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.475 qpair failed and we were unable to recover it. 
00:34:13.475 [2024-07-26 01:16:43.597572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.475 [2024-07-26 01:16:43.597601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.475 qpair failed and we were unable to recover it. 00:34:13.475 [2024-07-26 01:16:43.597747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.475 [2024-07-26 01:16:43.597777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.475 qpair failed and we were unable to recover it. 00:34:13.475 [2024-07-26 01:16:43.597892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.475 [2024-07-26 01:16:43.597921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.475 qpair failed and we were unable to recover it. 00:34:13.475 [2024-07-26 01:16:43.598084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.475 [2024-07-26 01:16:43.598111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.475 qpair failed and we were unable to recover it. 00:34:13.475 [2024-07-26 01:16:43.598288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.475 [2024-07-26 01:16:43.598318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.475 qpair failed and we were unable to recover it. 
00:34:13.475 [2024-07-26 01:16:43.598554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.475 [2024-07-26 01:16:43.598610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.475 qpair failed and we were unable to recover it. 00:34:13.475 [2024-07-26 01:16:43.598758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.475 [2024-07-26 01:16:43.598787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.475 qpair failed and we were unable to recover it. 00:34:13.475 [2024-07-26 01:16:43.598922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.475 [2024-07-26 01:16:43.598948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.475 qpair failed and we were unable to recover it. 00:34:13.475 [2024-07-26 01:16:43.599090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.475 [2024-07-26 01:16:43.599117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.475 qpair failed and we were unable to recover it. 00:34:13.475 [2024-07-26 01:16:43.599282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.475 [2024-07-26 01:16:43.599311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.475 qpair failed and we were unable to recover it. 
00:34:13.475 [2024-07-26 01:16:43.599455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.475 [2024-07-26 01:16:43.599484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.475 qpair failed and we were unable to recover it. 00:34:13.475 [2024-07-26 01:16:43.599636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.475 [2024-07-26 01:16:43.599663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.475 qpair failed and we were unable to recover it. 00:34:13.475 [2024-07-26 01:16:43.599804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.475 [2024-07-26 01:16:43.599830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.475 qpair failed and we were unable to recover it. 00:34:13.475 [2024-07-26 01:16:43.599995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.475 [2024-07-26 01:16:43.600039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.475 qpair failed and we were unable to recover it. 00:34:13.475 [2024-07-26 01:16:43.600233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.475 [2024-07-26 01:16:43.600260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.475 qpair failed and we were unable to recover it. 
00:34:13.475 [2024-07-26 01:16:43.600422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.475 [2024-07-26 01:16:43.600449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.475 qpair failed and we were unable to recover it. 00:34:13.475 [2024-07-26 01:16:43.600553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.475 [2024-07-26 01:16:43.600580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.475 qpair failed and we were unable to recover it. 00:34:13.475 [2024-07-26 01:16:43.600714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.475 [2024-07-26 01:16:43.600740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.475 qpair failed and we were unable to recover it. 00:34:13.475 [2024-07-26 01:16:43.600864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.475 [2024-07-26 01:16:43.600898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.475 qpair failed and we were unable to recover it. 00:34:13.475 [2024-07-26 01:16:43.601028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.475 [2024-07-26 01:16:43.601055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.475 qpair failed and we were unable to recover it. 
00:34:13.475 [2024-07-26 01:16:43.601201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.475 [2024-07-26 01:16:43.601243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.475 qpair failed and we were unable to recover it. 00:34:13.476 [2024-07-26 01:16:43.601404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.476 [2024-07-26 01:16:43.601432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.476 qpair failed and we were unable to recover it. 00:34:13.476 [2024-07-26 01:16:43.601597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.476 [2024-07-26 01:16:43.601641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.476 qpair failed and we were unable to recover it. 00:34:13.476 [2024-07-26 01:16:43.601796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.476 [2024-07-26 01:16:43.601822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.476 qpair failed and we were unable to recover it. 00:34:13.476 [2024-07-26 01:16:43.601960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.476 [2024-07-26 01:16:43.602003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.476 qpair failed and we were unable to recover it. 
00:34:13.476 [2024-07-26 01:16:43.602152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.476 [2024-07-26 01:16:43.602181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.476 qpair failed and we were unable to recover it. 00:34:13.476 [2024-07-26 01:16:43.602331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.476 [2024-07-26 01:16:43.602359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.476 qpair failed and we were unable to recover it. 00:34:13.476 [2024-07-26 01:16:43.602542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.476 [2024-07-26 01:16:43.602568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.476 qpair failed and we were unable to recover it. 00:34:13.476 [2024-07-26 01:16:43.602720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.476 [2024-07-26 01:16:43.602748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.476 qpair failed and we were unable to recover it. 00:34:13.476 [2024-07-26 01:16:43.602871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.476 [2024-07-26 01:16:43.602899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.476 qpair failed and we were unable to recover it. 
00:34:13.476 [2024-07-26 01:16:43.603022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.476 [2024-07-26 01:16:43.603051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.476 qpair failed and we were unable to recover it. 00:34:13.476 [2024-07-26 01:16:43.603216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.476 [2024-07-26 01:16:43.603242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.476 qpair failed and we were unable to recover it. 00:34:13.476 [2024-07-26 01:16:43.603355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.476 [2024-07-26 01:16:43.603380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.476 qpair failed and we were unable to recover it. 00:34:13.476 [2024-07-26 01:16:43.603535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.476 [2024-07-26 01:16:43.603564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.476 qpair failed and we were unable to recover it. 00:34:13.476 [2024-07-26 01:16:43.603715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.476 [2024-07-26 01:16:43.603744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.476 qpair failed and we were unable to recover it. 
00:34:13.476 [2024-07-26 01:16:43.603862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.476 [2024-07-26 01:16:43.603888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.476 qpair failed and we were unable to recover it. 00:34:13.476 [2024-07-26 01:16:43.604024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.476 [2024-07-26 01:16:43.604052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.476 qpair failed and we were unable to recover it. 00:34:13.476 [2024-07-26 01:16:43.604231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.476 [2024-07-26 01:16:43.604260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.476 qpair failed and we were unable to recover it. 00:34:13.476 [2024-07-26 01:16:43.604414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.476 [2024-07-26 01:16:43.604447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.476 qpair failed and we were unable to recover it. 00:34:13.476 [2024-07-26 01:16:43.604571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.476 [2024-07-26 01:16:43.604599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.476 qpair failed and we were unable to recover it. 
00:34:13.476 [2024-07-26 01:16:43.604741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.476 [2024-07-26 01:16:43.604768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.476 qpair failed and we were unable to recover it. 00:34:13.476 [2024-07-26 01:16:43.604943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.476 [2024-07-26 01:16:43.604970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.476 qpair failed and we were unable to recover it. 00:34:13.476 [2024-07-26 01:16:43.605099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.476 [2024-07-26 01:16:43.605126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.476 qpair failed and we were unable to recover it. 00:34:13.476 [2024-07-26 01:16:43.605269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.476 [2024-07-26 01:16:43.605297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.476 qpair failed and we were unable to recover it. 00:34:13.476 [2024-07-26 01:16:43.605414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.476 [2024-07-26 01:16:43.605458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.476 qpair failed and we were unable to recover it. 
00:34:13.476 [2024-07-26 01:16:43.605620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.476 [2024-07-26 01:16:43.605647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.476 qpair failed and we were unable to recover it. 00:34:13.476 [2024-07-26 01:16:43.605789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.476 [2024-07-26 01:16:43.605817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.476 qpair failed and we were unable to recover it. 00:34:13.476 [2024-07-26 01:16:43.605978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.476 [2024-07-26 01:16:43.606005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.476 qpair failed and we were unable to recover it. 00:34:13.476 [2024-07-26 01:16:43.606155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.476 [2024-07-26 01:16:43.606182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.476 qpair failed and we were unable to recover it. 00:34:13.476 [2024-07-26 01:16:43.606320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.476 [2024-07-26 01:16:43.606347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.476 qpair failed and we were unable to recover it. 
00:34:13.476 [2024-07-26 01:16:43.606496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.476 [2024-07-26 01:16:43.606539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.476 qpair failed and we were unable to recover it. 00:34:13.476 [2024-07-26 01:16:43.606710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.476 [2024-07-26 01:16:43.606742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.476 qpair failed and we were unable to recover it. 00:34:13.476 [2024-07-26 01:16:43.606856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.477 [2024-07-26 01:16:43.606883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.477 qpair failed and we were unable to recover it. 00:34:13.477 [2024-07-26 01:16:43.607015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.477 [2024-07-26 01:16:43.607046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.477 qpair failed and we were unable to recover it. 00:34:13.477 [2024-07-26 01:16:43.607193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.477 [2024-07-26 01:16:43.607221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.477 qpair failed and we were unable to recover it. 
00:34:13.477 [2024-07-26 01:16:43.607358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.477 [2024-07-26 01:16:43.607384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.477 qpair failed and we were unable to recover it. 00:34:13.477 [2024-07-26 01:16:43.607494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.477 [2024-07-26 01:16:43.607521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.477 qpair failed and we were unable to recover it. 00:34:13.477 [2024-07-26 01:16:43.607673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.477 [2024-07-26 01:16:43.607700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.477 qpair failed and we were unable to recover it. 00:34:13.477 [2024-07-26 01:16:43.607815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.477 [2024-07-26 01:16:43.607846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.477 qpair failed and we were unable to recover it. 00:34:13.477 [2024-07-26 01:16:43.607978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.477 [2024-07-26 01:16:43.608014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.477 qpair failed and we were unable to recover it. 
00:34:13.477 [2024-07-26 01:16:43.608166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.477 [2024-07-26 01:16:43.608194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.477 qpair failed and we were unable to recover it. 00:34:13.477 [2024-07-26 01:16:43.608335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.477 [2024-07-26 01:16:43.608362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.477 qpair failed and we were unable to recover it. 00:34:13.477 [2024-07-26 01:16:43.608479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.477 [2024-07-26 01:16:43.608512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.477 qpair failed and we were unable to recover it. 00:34:13.477 [2024-07-26 01:16:43.608621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.477 [2024-07-26 01:16:43.608654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.477 qpair failed and we were unable to recover it. 00:34:13.477 [2024-07-26 01:16:43.608794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.477 [2024-07-26 01:16:43.608838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.477 qpair failed and we were unable to recover it. 
00:34:13.477 [2024-07-26 01:16:43.609021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.477 [2024-07-26 01:16:43.609051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.477 qpair failed and we were unable to recover it. 00:34:13.477 [2024-07-26 01:16:43.609214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.477 [2024-07-26 01:16:43.609244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.477 qpair failed and we were unable to recover it. 00:34:13.477 [2024-07-26 01:16:43.609380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.477 [2024-07-26 01:16:43.609409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.477 qpair failed and we were unable to recover it. 00:34:13.477 [2024-07-26 01:16:43.609555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.477 [2024-07-26 01:16:43.609582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.477 qpair failed and we were unable to recover it. 00:34:13.477 [2024-07-26 01:16:43.609746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.477 [2024-07-26 01:16:43.609773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.477 qpair failed and we were unable to recover it. 
00:34:13.477 [2024-07-26 01:16:43.609881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.477 [2024-07-26 01:16:43.609909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.477 qpair failed and we were unable to recover it.
00:34:13.478 [2024-07-26 01:16:43.620712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7e620 is same with the state(5) to be set
[log condensed: the posix_sock_create / nvme_tcp_qpair_connect_sock error pair above repeats continuously from 01:16:43.609881 through 01:16:43.633882 (elapsed 00:34:13.477-00:34:13.480) for tqpairs 0x7fba68000b90, 0x7fba58000b90, 0x7fba60000b90, and 0xd70600, all connecting to addr=10.0.0.2, port=4420 with errno = 111; each attempt ends with "qpair failed and we were unable to recover it."]
00:34:13.480 [2024-07-26 01:16:43.634033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.480 [2024-07-26 01:16:43.634065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.480 qpair failed and we were unable to recover it. 00:34:13.480 [2024-07-26 01:16:43.634174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.480 [2024-07-26 01:16:43.634201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.480 qpair failed and we were unable to recover it. 00:34:13.480 [2024-07-26 01:16:43.634327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.480 [2024-07-26 01:16:43.634353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.480 qpair failed and we were unable to recover it. 00:34:13.480 [2024-07-26 01:16:43.634495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.480 [2024-07-26 01:16:43.634522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.480 qpair failed and we were unable to recover it. 00:34:13.480 [2024-07-26 01:16:43.634671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.480 [2024-07-26 01:16:43.634697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.480 qpair failed and we were unable to recover it. 
00:34:13.480 [2024-07-26 01:16:43.634903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.480 [2024-07-26 01:16:43.634959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.480 qpair failed and we were unable to recover it. 00:34:13.480 [2024-07-26 01:16:43.635081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.480 [2024-07-26 01:16:43.635111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.480 qpair failed and we were unable to recover it. 00:34:13.480 [2024-07-26 01:16:43.635269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.480 [2024-07-26 01:16:43.635296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.480 qpair failed and we were unable to recover it. 00:34:13.480 [2024-07-26 01:16:43.635420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.480 [2024-07-26 01:16:43.635450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.480 qpair failed and we were unable to recover it. 00:34:13.480 [2024-07-26 01:16:43.635650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.480 [2024-07-26 01:16:43.635694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.480 qpair failed and we were unable to recover it. 
00:34:13.480 [2024-07-26 01:16:43.635878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.480 [2024-07-26 01:16:43.635934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.480 qpair failed and we were unable to recover it. 00:34:13.480 [2024-07-26 01:16:43.636072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.480 [2024-07-26 01:16:43.636099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.480 qpair failed and we were unable to recover it. 00:34:13.480 [2024-07-26 01:16:43.636253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.480 [2024-07-26 01:16:43.636298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.480 qpair failed and we were unable to recover it. 00:34:13.480 [2024-07-26 01:16:43.636480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.480 [2024-07-26 01:16:43.636524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.480 qpair failed and we were unable to recover it. 00:34:13.480 [2024-07-26 01:16:43.636731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.480 [2024-07-26 01:16:43.636790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.480 qpair failed and we were unable to recover it. 
00:34:13.480 [2024-07-26 01:16:43.636958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.480 [2024-07-26 01:16:43.636986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.480 qpair failed and we were unable to recover it. 00:34:13.480 [2024-07-26 01:16:43.637168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.480 [2024-07-26 01:16:43.637213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.480 qpair failed and we were unable to recover it. 00:34:13.481 [2024-07-26 01:16:43.637367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.637411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 00:34:13.481 [2024-07-26 01:16:43.637557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.637601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 00:34:13.481 [2024-07-26 01:16:43.637836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.637888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 
00:34:13.481 [2024-07-26 01:16:43.638035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.638066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 00:34:13.481 [2024-07-26 01:16:43.638218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.638268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 00:34:13.481 [2024-07-26 01:16:43.638396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.638442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 00:34:13.481 [2024-07-26 01:16:43.638572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.638617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 00:34:13.481 [2024-07-26 01:16:43.638785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.638812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 
00:34:13.481 [2024-07-26 01:16:43.638972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.638999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 00:34:13.481 [2024-07-26 01:16:43.639110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.639137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 00:34:13.481 [2024-07-26 01:16:43.639258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.639302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 00:34:13.481 [2024-07-26 01:16:43.639439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.639469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 00:34:13.481 [2024-07-26 01:16:43.639628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.639657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 
00:34:13.481 [2024-07-26 01:16:43.639776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.639803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 00:34:13.481 [2024-07-26 01:16:43.639937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.639965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 00:34:13.481 [2024-07-26 01:16:43.640101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.640129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 00:34:13.481 [2024-07-26 01:16:43.640265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.640296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 00:34:13.481 [2024-07-26 01:16:43.640408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.640436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 
00:34:13.481 [2024-07-26 01:16:43.640535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.640562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 00:34:13.481 [2024-07-26 01:16:43.640697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.640723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 00:34:13.481 [2024-07-26 01:16:43.641440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.641471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 00:34:13.481 [2024-07-26 01:16:43.641648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.641675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 00:34:13.481 [2024-07-26 01:16:43.641788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.641816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 
00:34:13.481 [2024-07-26 01:16:43.641982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.642009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 00:34:13.481 [2024-07-26 01:16:43.642168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.642213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 00:34:13.481 [2024-07-26 01:16:43.642370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.642415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 00:34:13.481 [2024-07-26 01:16:43.642576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.642622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 00:34:13.481 [2024-07-26 01:16:43.642757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.642785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 
00:34:13.481 [2024-07-26 01:16:43.643516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.643547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 00:34:13.481 [2024-07-26 01:16:43.643693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.643720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 00:34:13.481 [2024-07-26 01:16:43.644450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.644481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 00:34:13.481 [2024-07-26 01:16:43.644700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.644751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.481 qpair failed and we were unable to recover it. 00:34:13.481 [2024-07-26 01:16:43.645412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.481 [2024-07-26 01:16:43.645442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.482 qpair failed and we were unable to recover it. 
00:34:13.482 [2024-07-26 01:16:43.645631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.482 [2024-07-26 01:16:43.645659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.482 qpair failed and we were unable to recover it. 00:34:13.482 [2024-07-26 01:16:43.645825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.482 [2024-07-26 01:16:43.645852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.482 qpair failed and we were unable to recover it. 00:34:13.482 [2024-07-26 01:16:43.645983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.482 [2024-07-26 01:16:43.646010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.482 qpair failed and we were unable to recover it. 00:34:13.482 [2024-07-26 01:16:43.646185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.482 [2024-07-26 01:16:43.646231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.482 qpair failed and we were unable to recover it. 00:34:13.482 [2024-07-26 01:16:43.646389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.482 [2024-07-26 01:16:43.646434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.482 qpair failed and we were unable to recover it. 
00:34:13.482 [2024-07-26 01:16:43.646597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.482 [2024-07-26 01:16:43.646643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.482 qpair failed and we were unable to recover it. 00:34:13.482 [2024-07-26 01:16:43.646803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.482 [2024-07-26 01:16:43.646831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.482 qpair failed and we were unable to recover it. 00:34:13.482 [2024-07-26 01:16:43.646967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.482 [2024-07-26 01:16:43.646994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.482 qpair failed and we were unable to recover it. 00:34:13.482 [2024-07-26 01:16:43.647149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.482 [2024-07-26 01:16:43.647195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.482 qpair failed and we were unable to recover it. 00:34:13.482 [2024-07-26 01:16:43.647356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.482 [2024-07-26 01:16:43.647401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.482 qpair failed and we were unable to recover it. 
00:34:13.482 [2024-07-26 01:16:43.647550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.482 [2024-07-26 01:16:43.647592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.482 qpair failed and we were unable to recover it. 00:34:13.482 [2024-07-26 01:16:43.647751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.482 [2024-07-26 01:16:43.647783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.482 qpair failed and we were unable to recover it. 00:34:13.482 [2024-07-26 01:16:43.647965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.482 [2024-07-26 01:16:43.647998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.482 qpair failed and we were unable to recover it. 00:34:13.482 [2024-07-26 01:16:43.648169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.482 [2024-07-26 01:16:43.648200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.482 qpair failed and we were unable to recover it. 00:34:13.482 [2024-07-26 01:16:43.648357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.482 [2024-07-26 01:16:43.648387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.482 qpair failed and we were unable to recover it. 
00:34:13.482 [2024-07-26 01:16:43.648533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.482 [2024-07-26 01:16:43.648567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.482 qpair failed and we were unable to recover it. 00:34:13.482 [2024-07-26 01:16:43.648748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.482 [2024-07-26 01:16:43.648778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.482 qpair failed and we were unable to recover it. 00:34:13.482 [2024-07-26 01:16:43.648901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.482 [2024-07-26 01:16:43.648931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.482 qpair failed and we were unable to recover it. 00:34:13.482 [2024-07-26 01:16:43.649079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.482 [2024-07-26 01:16:43.649130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.482 qpair failed and we were unable to recover it. 00:34:13.482 [2024-07-26 01:16:43.649261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.482 [2024-07-26 01:16:43.649292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.482 qpair failed and we were unable to recover it. 
00:34:13.482 [2024-07-26 01:16:43.649453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.482 [2024-07-26 01:16:43.649485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.482 qpair failed and we were unable to recover it. 00:34:13.482 [2024-07-26 01:16:43.649614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.482 [2024-07-26 01:16:43.649644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.482 qpair failed and we were unable to recover it. 00:34:13.482 [2024-07-26 01:16:43.649786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.482 [2024-07-26 01:16:43.649815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.482 qpair failed and we were unable to recover it. 00:34:13.482 [2024-07-26 01:16:43.649945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.482 [2024-07-26 01:16:43.649980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.482 qpair failed and we were unable to recover it. 00:34:13.482 [2024-07-26 01:16:43.650188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.482 [2024-07-26 01:16:43.650218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.482 qpair failed and we were unable to recover it. 
00:34:13.482 [2024-07-26 01:16:43.650334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.482 [2024-07-26 01:16:43.650361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.482 qpair failed and we were unable to recover it.
[... the three-line error sequence above repeats continuously from 01:16:43.650 through 01:16:43.674, first for tqpair=0x7fba58000b90, then for tqpair=0xd70600, then for tqpair=0x7fba58000b90 again; every attempt is a connect() to 10.0.0.2, port=4420 failing with errno = 111, and every qpair is reported as unrecoverable ...]
00:34:13.485 [2024-07-26 01:16:43.674295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.485 [2024-07-26 01:16:43.674321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.485 qpair failed and we were unable to recover it.
00:34:13.485 [2024-07-26 01:16:43.674485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.485 [2024-07-26 01:16:43.674516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.485 qpair failed and we were unable to recover it. 00:34:13.485 [2024-07-26 01:16:43.674671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.485 [2024-07-26 01:16:43.674697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.485 qpair failed and we were unable to recover it. 00:34:13.485 [2024-07-26 01:16:43.674834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.485 [2024-07-26 01:16:43.674881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.485 qpair failed and we were unable to recover it. 00:34:13.485 [2024-07-26 01:16:43.675044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.675084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 00:34:13.486 [2024-07-26 01:16:43.675218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.675245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 
00:34:13.486 [2024-07-26 01:16:43.675363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.675389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 00:34:13.486 [2024-07-26 01:16:43.675611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.675637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 00:34:13.486 [2024-07-26 01:16:43.675785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.675811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 00:34:13.486 [2024-07-26 01:16:43.675947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.675973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 00:34:13.486 [2024-07-26 01:16:43.676117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.676145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 
00:34:13.486 [2024-07-26 01:16:43.676255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.676281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 00:34:13.486 [2024-07-26 01:16:43.676386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.676414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 00:34:13.486 [2024-07-26 01:16:43.676560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.676586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 00:34:13.486 [2024-07-26 01:16:43.676698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.676725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 00:34:13.486 [2024-07-26 01:16:43.676836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.676862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 
00:34:13.486 [2024-07-26 01:16:43.677010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.677039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 00:34:13.486 [2024-07-26 01:16:43.677192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.677219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 00:34:13.486 [2024-07-26 01:16:43.677328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.677355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 00:34:13.486 [2024-07-26 01:16:43.677523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.677552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 00:34:13.486 [2024-07-26 01:16:43.677676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.677701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 
00:34:13.486 [2024-07-26 01:16:43.677843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.677870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 00:34:13.486 [2024-07-26 01:16:43.678035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.678073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 00:34:13.486 [2024-07-26 01:16:43.678231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.678258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 00:34:13.486 [2024-07-26 01:16:43.678385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.678412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 00:34:13.486 [2024-07-26 01:16:43.678554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.678580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 
00:34:13.486 [2024-07-26 01:16:43.678723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.678750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 00:34:13.486 [2024-07-26 01:16:43.678855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.678896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 00:34:13.486 [2024-07-26 01:16:43.679082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.679133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 00:34:13.486 [2024-07-26 01:16:43.679243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.679270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 00:34:13.486 [2024-07-26 01:16:43.679418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.679460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 
00:34:13.486 [2024-07-26 01:16:43.679639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.679668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 00:34:13.486 [2024-07-26 01:16:43.679793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.679820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 00:34:13.486 [2024-07-26 01:16:43.679988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.680014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 00:34:13.486 [2024-07-26 01:16:43.680203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.680231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 00:34:13.486 [2024-07-26 01:16:43.680391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.680417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 
00:34:13.486 [2024-07-26 01:16:43.680521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.680568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 00:34:13.486 [2024-07-26 01:16:43.680725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.680754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 00:34:13.486 [2024-07-26 01:16:43.680914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.680939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 00:34:13.486 [2024-07-26 01:16:43.681051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.681100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 00:34:13.486 [2024-07-26 01:16:43.681245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.681275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.486 qpair failed and we were unable to recover it. 
00:34:13.486 [2024-07-26 01:16:43.681439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.486 [2024-07-26 01:16:43.681468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 00:34:13.487 [2024-07-26 01:16:43.681631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.681660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 00:34:13.487 [2024-07-26 01:16:43.681849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.681876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 00:34:13.487 [2024-07-26 01:16:43.682009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.682035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 00:34:13.487 [2024-07-26 01:16:43.682198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.682227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 
00:34:13.487 [2024-07-26 01:16:43.682389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.682427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 00:34:13.487 [2024-07-26 01:16:43.682556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.682583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 00:34:13.487 [2024-07-26 01:16:43.682751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.682794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 00:34:13.487 [2024-07-26 01:16:43.682906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.682937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 00:34:13.487 [2024-07-26 01:16:43.683086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.683123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 
00:34:13.487 [2024-07-26 01:16:43.683260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.683287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 00:34:13.487 [2024-07-26 01:16:43.683436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.683478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 00:34:13.487 [2024-07-26 01:16:43.683612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.683637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 00:34:13.487 [2024-07-26 01:16:43.683772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.683815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 00:34:13.487 [2024-07-26 01:16:43.683991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.684018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 
00:34:13.487 [2024-07-26 01:16:43.684174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.684200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 00:34:13.487 [2024-07-26 01:16:43.684362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.684398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 00:34:13.487 [2024-07-26 01:16:43.684541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.684570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 00:34:13.487 [2024-07-26 01:16:43.684729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.684760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 00:34:13.487 [2024-07-26 01:16:43.684927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.684956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 
00:34:13.487 [2024-07-26 01:16:43.685134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.685166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 00:34:13.487 [2024-07-26 01:16:43.685311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.685337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 00:34:13.487 [2024-07-26 01:16:43.685451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.685479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 00:34:13.487 [2024-07-26 01:16:43.685644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.685670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 00:34:13.487 [2024-07-26 01:16:43.685779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.685805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 
00:34:13.487 [2024-07-26 01:16:43.685913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.685939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 00:34:13.487 [2024-07-26 01:16:43.686124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.686153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 00:34:13.487 [2024-07-26 01:16:43.686276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.686303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 00:34:13.487 [2024-07-26 01:16:43.686412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.686438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 00:34:13.487 [2024-07-26 01:16:43.686581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.686611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 
00:34:13.487 [2024-07-26 01:16:43.686768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.686794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 00:34:13.487 [2024-07-26 01:16:43.686930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.686956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 00:34:13.487 [2024-07-26 01:16:43.687155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.687182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 00:34:13.487 [2024-07-26 01:16:43.687343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.687369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 00:34:13.487 [2024-07-26 01:16:43.687520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.487 [2024-07-26 01:16:43.687551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.487 qpair failed and we were unable to recover it. 
00:34:13.487 [2024-07-26 01:16:43.687692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.487 [2024-07-26 01:16:43.687720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.487 qpair failed and we were unable to recover it.
00:34:13.487 [2024-07-26 01:16:43.687874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.487 [2024-07-26 01:16:43.687900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.487 qpair failed and we were unable to recover it.
00:34:13.487 [2024-07-26 01:16:43.688063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.487 [2024-07-26 01:16:43.688106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.487 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.688258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.688287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.688456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.688482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.688645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.688688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.688841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.688871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.689034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.689065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.689208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.689234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.689400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.689429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.689574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.689601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.689748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.689775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.689944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.689970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.690134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.690160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.690317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.690362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.690503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.690530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.690665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.690691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.690825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.690872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.691057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.691093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.691223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.691248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.691365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.691391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.691586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.691618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.691781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.691807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.691937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.691968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.692121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.692150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.692308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.692334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.692469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.692498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.692643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.692673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.692825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.692851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.692979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.693024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.693211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.693241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.693369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.693396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.693502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.693528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.693666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.693692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.693831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.693858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.693969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.694010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.694162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.694189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.694355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.694382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.694532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.694563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.694721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.488 [2024-07-26 01:16:43.694752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.488 qpair failed and we were unable to recover it.
00:34:13.488 [2024-07-26 01:16:43.694887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.694912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.695050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.695084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.695235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.695279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.695466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.695492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.695672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.695701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.695845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.695877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.696030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.696066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.696189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.696217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.696333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.696361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.696473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.696499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.696666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.696692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.696858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.696887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.697053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.697132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.697267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.697299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.697445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.697472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.697594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.697620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.697779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.697804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.697974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.698003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.698148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.698178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.698339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.698381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.698541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.698567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.698700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.698727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.698864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.698890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.699029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.699064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.699197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.699222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.699336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.699361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.699506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.699533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.699694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.699720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.699873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.699902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.700053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.700087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.700220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.700247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.700383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.700410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.700546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.700572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.700768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.700794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.700978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.701008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.701182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.701208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.701347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.701372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.489 [2024-07-26 01:16:43.701495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.489 [2024-07-26 01:16:43.701540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.489 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.701698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.701730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.701884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.701910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.702097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.702127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.702274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.702304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.702490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.702518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.702633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.702659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.702768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.702795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.702931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.702960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.703067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.703093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.703246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.703273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.703453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.703480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.703639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.703666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.703812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.703838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.704716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.704749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.704902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.704936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.705068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.705097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.705240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.705266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.705377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.705404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.705595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.705624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.705746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.705772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.705907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.705934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.706098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.706137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.706321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.706347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.706502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.706531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.706714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.706741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.706879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.706911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.707089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.707127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.707280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.707310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.707444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.707470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.707582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.707610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.707779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.707808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.707966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.490 [2024-07-26 01:16:43.707993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.490 qpair failed and we were unable to recover it.
00:34:13.490 [2024-07-26 01:16:43.708138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.491 [2024-07-26 01:16:43.708165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.491 qpair failed and we were unable to recover it.
00:34:13.491 [2024-07-26 01:16:43.708308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.491 [2024-07-26 01:16:43.708334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.491 qpair failed and we were unable to recover it.
00:34:13.491 [2024-07-26 01:16:43.708507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.491 [2024-07-26 01:16:43.708532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.491 qpair failed and we were unable to recover it.
00:34:13.491 [2024-07-26 01:16:43.708670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.491 [2024-07-26 01:16:43.708696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.491 qpair failed and we were unable to recover it.
00:34:13.491 [2024-07-26 01:16:43.708821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.491 [2024-07-26 01:16:43.708847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.491 qpair failed and we were unable to recover it.
00:34:13.491 [2024-07-26 01:16:43.708961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.708989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 00:34:13.491 [2024-07-26 01:16:43.709156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.709186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 00:34:13.491 [2024-07-26 01:16:43.709346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.709378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 00:34:13.491 [2024-07-26 01:16:43.709540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.709569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 00:34:13.491 [2024-07-26 01:16:43.709695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.709721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 
00:34:13.491 [2024-07-26 01:16:43.709877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.709919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 00:34:13.491 [2024-07-26 01:16:43.710083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.710117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 00:34:13.491 [2024-07-26 01:16:43.710232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.710258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 00:34:13.491 [2024-07-26 01:16:43.710385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.710411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 00:34:13.491 [2024-07-26 01:16:43.710575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.710601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 
00:34:13.491 [2024-07-26 01:16:43.710748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.710777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 00:34:13.491 [2024-07-26 01:16:43.710901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.710930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 00:34:13.491 [2024-07-26 01:16:43.711101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.711128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 00:34:13.491 [2024-07-26 01:16:43.711281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.711311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 00:34:13.491 [2024-07-26 01:16:43.711453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.711482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 
00:34:13.491 [2024-07-26 01:16:43.711643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.711669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 00:34:13.491 [2024-07-26 01:16:43.711806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.711850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 00:34:13.491 [2024-07-26 01:16:43.712030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.712066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 00:34:13.491 [2024-07-26 01:16:43.712207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.712233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 00:34:13.491 [2024-07-26 01:16:43.712356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.712384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 
00:34:13.491 [2024-07-26 01:16:43.712563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.712592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 00:34:13.491 [2024-07-26 01:16:43.712745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.712770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 00:34:13.491 [2024-07-26 01:16:43.712910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.712937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 00:34:13.491 [2024-07-26 01:16:43.713117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.713145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 00:34:13.491 [2024-07-26 01:16:43.713307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.713335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 
00:34:13.491 [2024-07-26 01:16:43.713465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.713491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 00:34:13.491 [2024-07-26 01:16:43.713656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.713700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 00:34:13.491 [2024-07-26 01:16:43.713885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.713912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 00:34:13.491 [2024-07-26 01:16:43.714043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.714078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 00:34:13.491 [2024-07-26 01:16:43.714227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.714271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 
00:34:13.491 [2024-07-26 01:16:43.714438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.714464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 00:34:13.491 [2024-07-26 01:16:43.714580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.491 [2024-07-26 01:16:43.714606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.491 qpair failed and we were unable to recover it. 00:34:13.491 [2024-07-26 01:16:43.714774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.714800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 00:34:13.492 [2024-07-26 01:16:43.714935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.714961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 00:34:13.492 [2024-07-26 01:16:43.715120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.715148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 
00:34:13.492 [2024-07-26 01:16:43.715272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.715306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 00:34:13.492 [2024-07-26 01:16:43.715475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.715503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 00:34:13.492 [2024-07-26 01:16:43.715613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.715638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 00:34:13.492 [2024-07-26 01:16:43.715776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.715822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 00:34:13.492 [2024-07-26 01:16:43.715979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.716005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 
00:34:13.492 [2024-07-26 01:16:43.716112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.716140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 00:34:13.492 [2024-07-26 01:16:43.716280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.716306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 00:34:13.492 [2024-07-26 01:16:43.716478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.716503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 00:34:13.492 [2024-07-26 01:16:43.716650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.716677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 00:34:13.492 [2024-07-26 01:16:43.716807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.716834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 
00:34:13.492 [2024-07-26 01:16:43.716990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.717016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 00:34:13.492 [2024-07-26 01:16:43.717185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.717212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 00:34:13.492 [2024-07-26 01:16:43.717330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.717376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 00:34:13.492 [2024-07-26 01:16:43.717508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.717535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 00:34:13.492 [2024-07-26 01:16:43.717671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.717698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 
00:34:13.492 [2024-07-26 01:16:43.717871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.717897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 00:34:13.492 [2024-07-26 01:16:43.718031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.718056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 00:34:13.492 [2024-07-26 01:16:43.718211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.718237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 00:34:13.492 [2024-07-26 01:16:43.718390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.718416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 00:34:13.492 [2024-07-26 01:16:43.718555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.718580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 
00:34:13.492 [2024-07-26 01:16:43.718726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.718753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 00:34:13.492 [2024-07-26 01:16:43.718867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.718912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 00:34:13.492 [2024-07-26 01:16:43.719081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.719117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 00:34:13.492 [2024-07-26 01:16:43.719250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.719292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 00:34:13.492 [2024-07-26 01:16:43.719478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.719506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 
00:34:13.492 [2024-07-26 01:16:43.719635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.719661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 00:34:13.492 [2024-07-26 01:16:43.719789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.719814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 00:34:13.492 [2024-07-26 01:16:43.719945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.719970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 00:34:13.492 [2024-07-26 01:16:43.720109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.720134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 00:34:13.492 [2024-07-26 01:16:43.720289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.720314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 
00:34:13.492 [2024-07-26 01:16:43.720433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.720458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 00:34:13.492 [2024-07-26 01:16:43.720592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.720617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 00:34:13.492 [2024-07-26 01:16:43.720726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.720752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 00:34:13.492 [2024-07-26 01:16:43.720917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.720942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 00:34:13.492 [2024-07-26 01:16:43.721046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.492 [2024-07-26 01:16:43.721078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.492 qpair failed and we were unable to recover it. 
00:34:13.493 [2024-07-26 01:16:43.721188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.493 [2024-07-26 01:16:43.721214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.493 qpair failed and we were unable to recover it. 00:34:13.493 [2024-07-26 01:16:43.721354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.493 [2024-07-26 01:16:43.721380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.493 qpair failed and we were unable to recover it. 00:34:13.493 [2024-07-26 01:16:43.721490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.493 [2024-07-26 01:16:43.721513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.493 qpair failed and we were unable to recover it. 00:34:13.493 [2024-07-26 01:16:43.721642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.493 [2024-07-26 01:16:43.721667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.493 qpair failed and we were unable to recover it. 00:34:13.493 [2024-07-26 01:16:43.721802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.493 [2024-07-26 01:16:43.721828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.493 qpair failed and we were unable to recover it. 
00:34:13.493 [2024-07-26 01:16:43.721930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.493 [2024-07-26 01:16:43.721955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.493 qpair failed and we were unable to recover it.
[last message pair repeated ~115 times between 01:16:43.721930 and 01:16:43.740592: every connect() attempt to 10.0.0.2:4420 failed with errno = 111 (ECONNREFUSED), reported alternately for tqpair=0xd70600, tqpair=0x7fba58000b90, and tqpair=0x7fba60000b90; each qpair failed and could not be recovered]
00:34:13.496 [2024-07-26 01:16:43.740566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.496 [2024-07-26 01:16:43.740592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.496 qpair failed and we were unable to recover it.
00:34:13.496 [2024-07-26 01:16:43.740759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.496 [2024-07-26 01:16:43.740786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.496 qpair failed and we were unable to recover it. 00:34:13.496 [2024-07-26 01:16:43.740932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.496 [2024-07-26 01:16:43.740958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.496 qpair failed and we were unable to recover it. 00:34:13.496 [2024-07-26 01:16:43.741072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.496 [2024-07-26 01:16:43.741098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.496 qpair failed and we were unable to recover it. 00:34:13.496 [2024-07-26 01:16:43.741208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.496 [2024-07-26 01:16:43.741233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.496 qpair failed and we were unable to recover it. 00:34:13.496 [2024-07-26 01:16:43.741367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.496 [2024-07-26 01:16:43.741392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.496 qpair failed and we were unable to recover it. 
00:34:13.496 [2024-07-26 01:16:43.741530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.496 [2024-07-26 01:16:43.741554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.496 qpair failed and we were unable to recover it. 00:34:13.496 [2024-07-26 01:16:43.741662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.496 [2024-07-26 01:16:43.741688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.496 qpair failed and we were unable to recover it. 00:34:13.496 [2024-07-26 01:16:43.741800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.496 [2024-07-26 01:16:43.741826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.496 qpair failed and we were unable to recover it. 00:34:13.496 [2024-07-26 01:16:43.741961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.496 [2024-07-26 01:16:43.741986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.496 qpair failed and we were unable to recover it. 00:34:13.496 [2024-07-26 01:16:43.742126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.496 [2024-07-26 01:16:43.742153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.496 qpair failed and we were unable to recover it. 
00:34:13.496 [2024-07-26 01:16:43.742287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.496 [2024-07-26 01:16:43.742313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.496 qpair failed and we were unable to recover it. 00:34:13.496 [2024-07-26 01:16:43.742447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.496 [2024-07-26 01:16:43.742473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.496 qpair failed and we were unable to recover it. 00:34:13.496 [2024-07-26 01:16:43.742605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.496 [2024-07-26 01:16:43.742630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.496 qpair failed and we were unable to recover it. 00:34:13.496 [2024-07-26 01:16:43.742769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.496 [2024-07-26 01:16:43.742795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.496 qpair failed and we were unable to recover it. 00:34:13.496 [2024-07-26 01:16:43.742928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.496 [2024-07-26 01:16:43.742954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.496 qpair failed and we were unable to recover it. 
00:34:13.496 [2024-07-26 01:16:43.743072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.496 [2024-07-26 01:16:43.743097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.496 qpair failed and we were unable to recover it. 00:34:13.496 [2024-07-26 01:16:43.743204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.496 [2024-07-26 01:16:43.743231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.496 qpair failed and we were unable to recover it. 00:34:13.496 [2024-07-26 01:16:43.743362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.496 [2024-07-26 01:16:43.743388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.496 qpair failed and we were unable to recover it. 00:34:13.496 [2024-07-26 01:16:43.743526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.496 [2024-07-26 01:16:43.743554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.496 qpair failed and we were unable to recover it. 00:34:13.496 [2024-07-26 01:16:43.743719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.496 [2024-07-26 01:16:43.743745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.496 qpair failed and we were unable to recover it. 
00:34:13.496 [2024-07-26 01:16:43.743877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.496 [2024-07-26 01:16:43.743902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.496 qpair failed and we were unable to recover it. 00:34:13.496 [2024-07-26 01:16:43.744007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.496 [2024-07-26 01:16:43.744032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.496 qpair failed and we were unable to recover it. 00:34:13.496 [2024-07-26 01:16:43.744164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.496 [2024-07-26 01:16:43.744190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.496 qpair failed and we were unable to recover it. 00:34:13.496 [2024-07-26 01:16:43.744294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.496 [2024-07-26 01:16:43.744319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.496 qpair failed and we were unable to recover it. 00:34:13.496 [2024-07-26 01:16:43.744455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.496 [2024-07-26 01:16:43.744480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.496 qpair failed and we were unable to recover it. 
00:34:13.496 [2024-07-26 01:16:43.744590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.496 [2024-07-26 01:16:43.744615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 00:34:13.497 [2024-07-26 01:16:43.744751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.744775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 00:34:13.497 [2024-07-26 01:16:43.744913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.744938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 00:34:13.497 [2024-07-26 01:16:43.745050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.745082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 00:34:13.497 [2024-07-26 01:16:43.745221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.745247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 
00:34:13.497 [2024-07-26 01:16:43.745387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.745412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 00:34:13.497 [2024-07-26 01:16:43.745541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.745566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 00:34:13.497 [2024-07-26 01:16:43.745695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.745720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 00:34:13.497 [2024-07-26 01:16:43.745832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.745857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 00:34:13.497 [2024-07-26 01:16:43.745964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.745989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 
00:34:13.497 [2024-07-26 01:16:43.746119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.746145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 00:34:13.497 [2024-07-26 01:16:43.746284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.746309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 00:34:13.497 [2024-07-26 01:16:43.746444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.746469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 00:34:13.497 [2024-07-26 01:16:43.746603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.746628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 00:34:13.497 [2024-07-26 01:16:43.746732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.746757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 
00:34:13.497 [2024-07-26 01:16:43.746880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.746907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 00:34:13.497 [2024-07-26 01:16:43.747047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.747083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 00:34:13.497 [2024-07-26 01:16:43.747225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.747250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 00:34:13.497 [2024-07-26 01:16:43.747358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.747383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 00:34:13.497 [2024-07-26 01:16:43.747485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.747510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 
00:34:13.497 [2024-07-26 01:16:43.747640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.747665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 00:34:13.497 [2024-07-26 01:16:43.747801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.747826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 00:34:13.497 [2024-07-26 01:16:43.747922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.747948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 00:34:13.497 [2024-07-26 01:16:43.748079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.748105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 00:34:13.497 [2024-07-26 01:16:43.748216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.748241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 
00:34:13.497 [2024-07-26 01:16:43.748346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.748371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 00:34:13.497 [2024-07-26 01:16:43.748497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.748522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 00:34:13.497 [2024-07-26 01:16:43.748655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.748680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 00:34:13.497 [2024-07-26 01:16:43.748785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.748810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 00:34:13.497 [2024-07-26 01:16:43.748929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.748968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 
00:34:13.497 [2024-07-26 01:16:43.749122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.749152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 00:34:13.497 [2024-07-26 01:16:43.749261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.749287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 00:34:13.497 [2024-07-26 01:16:43.749428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.749454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 00:34:13.497 [2024-07-26 01:16:43.749620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.749646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 00:34:13.497 [2024-07-26 01:16:43.749780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.749805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 
00:34:13.497 [2024-07-26 01:16:43.749918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.749945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 00:34:13.497 [2024-07-26 01:16:43.750053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.750088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 00:34:13.497 [2024-07-26 01:16:43.750206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.497 [2024-07-26 01:16:43.750231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.497 qpair failed and we were unable to recover it. 00:34:13.498 [2024-07-26 01:16:43.750376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.750401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 00:34:13.498 [2024-07-26 01:16:43.750537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.750562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 
00:34:13.498 [2024-07-26 01:16:43.750667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.750692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 00:34:13.498 [2024-07-26 01:16:43.750835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.750860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 00:34:13.498 [2024-07-26 01:16:43.750994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.751023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 00:34:13.498 [2024-07-26 01:16:43.751164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.751190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 00:34:13.498 [2024-07-26 01:16:43.751292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.751317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 
00:34:13.498 [2024-07-26 01:16:43.751478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.751503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 00:34:13.498 [2024-07-26 01:16:43.751615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.751642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 00:34:13.498 [2024-07-26 01:16:43.751763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.751791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 00:34:13.498 [2024-07-26 01:16:43.751932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.751958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 00:34:13.498 [2024-07-26 01:16:43.752092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.752121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 
00:34:13.498 [2024-07-26 01:16:43.752250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.752276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 00:34:13.498 [2024-07-26 01:16:43.752412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.752438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 00:34:13.498 [2024-07-26 01:16:43.752598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.752624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 00:34:13.498 [2024-07-26 01:16:43.752786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.752812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 00:34:13.498 [2024-07-26 01:16:43.752930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.752957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 
00:34:13.498 [2024-07-26 01:16:43.753093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.753123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 00:34:13.498 [2024-07-26 01:16:43.753235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.753260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 00:34:13.498 [2024-07-26 01:16:43.753377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.753403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 00:34:13.498 [2024-07-26 01:16:43.753564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.753590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 00:34:13.498 [2024-07-26 01:16:43.753695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.753720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 
00:34:13.498 [2024-07-26 01:16:43.753828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.753855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 00:34:13.498 [2024-07-26 01:16:43.753988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.754014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 00:34:13.498 [2024-07-26 01:16:43.754128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.754156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 00:34:13.498 [2024-07-26 01:16:43.754292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.754318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 00:34:13.498 [2024-07-26 01:16:43.754459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.754484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 
00:34:13.498 [2024-07-26 01:16:43.754620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.754646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 00:34:13.498 [2024-07-26 01:16:43.754756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.754782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 00:34:13.498 [2024-07-26 01:16:43.754894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.754920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 00:34:13.498 [2024-07-26 01:16:43.755052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.755093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 00:34:13.498 [2024-07-26 01:16:43.755211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.755237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 
00:34:13.498 [2024-07-26 01:16:43.755396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.755422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 00:34:13.498 [2024-07-26 01:16:43.755543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.755567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 00:34:13.498 [2024-07-26 01:16:43.755705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.755730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 00:34:13.498 [2024-07-26 01:16:43.755839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.755864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 00:34:13.498 [2024-07-26 01:16:43.755976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.756001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 
00:34:13.498 [2024-07-26 01:16:43.756138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.498 [2024-07-26 01:16:43.756165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.498 qpair failed and we were unable to recover it. 00:34:13.499 [2024-07-26 01:16:43.756293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.756318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 00:34:13.499 [2024-07-26 01:16:43.756454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.756479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 00:34:13.499 [2024-07-26 01:16:43.756588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.756614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 00:34:13.499 [2024-07-26 01:16:43.756724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.756749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 
00:34:13.499 [2024-07-26 01:16:43.756880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.756905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 00:34:13.499 [2024-07-26 01:16:43.757016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.757041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 00:34:13.499 [2024-07-26 01:16:43.757169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.757194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 00:34:13.499 [2024-07-26 01:16:43.757307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.757335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 00:34:13.499 [2024-07-26 01:16:43.757511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.757536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 
00:34:13.499 [2024-07-26 01:16:43.757672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.757699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 00:34:13.499 [2024-07-26 01:16:43.757808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.757833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 00:34:13.499 [2024-07-26 01:16:43.757997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.758022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 00:34:13.499 [2024-07-26 01:16:43.758129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.758155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 00:34:13.499 [2024-07-26 01:16:43.758287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.758312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 
00:34:13.499 [2024-07-26 01:16:43.758453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.758478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 00:34:13.499 [2024-07-26 01:16:43.758583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.758608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 00:34:13.499 [2024-07-26 01:16:43.758711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.758736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 00:34:13.499 [2024-07-26 01:16:43.758872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.758897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 00:34:13.499 [2024-07-26 01:16:43.759026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.759051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 
00:34:13.499 [2024-07-26 01:16:43.759165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.759190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 00:34:13.499 [2024-07-26 01:16:43.759293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.759323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 00:34:13.499 [2024-07-26 01:16:43.759485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.759510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 00:34:13.499 [2024-07-26 01:16:43.759621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.759646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 00:34:13.499 [2024-07-26 01:16:43.759785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.759810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 
00:34:13.499 [2024-07-26 01:16:43.759909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.759935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 00:34:13.499 [2024-07-26 01:16:43.760028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.760053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 00:34:13.499 [2024-07-26 01:16:43.760230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.760255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 00:34:13.499 [2024-07-26 01:16:43.760384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.760409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 00:34:13.499 [2024-07-26 01:16:43.760542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.760567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 
00:34:13.499 [2024-07-26 01:16:43.760707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.760733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 00:34:13.499 [2024-07-26 01:16:43.760869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.760894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 00:34:13.499 [2024-07-26 01:16:43.760996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.761021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 00:34:13.499 [2024-07-26 01:16:43.761192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.499 [2024-07-26 01:16:43.761218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.499 qpair failed and we were unable to recover it. 00:34:13.499 [2024-07-26 01:16:43.761328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.761353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 
00:34:13.500 [2024-07-26 01:16:43.761514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.761539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 00:34:13.500 [2024-07-26 01:16:43.761687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.761713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 00:34:13.500 [2024-07-26 01:16:43.761855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.761880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 00:34:13.500 [2024-07-26 01:16:43.762011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.762036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 00:34:13.500 [2024-07-26 01:16:43.762169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.762194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 
00:34:13.500 [2024-07-26 01:16:43.762365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.762390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 00:34:13.500 [2024-07-26 01:16:43.762494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.762519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 00:34:13.500 [2024-07-26 01:16:43.762654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.762679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 00:34:13.500 [2024-07-26 01:16:43.762799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.762823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 00:34:13.500 [2024-07-26 01:16:43.762951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.762976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 
00:34:13.500 [2024-07-26 01:16:43.763091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.763121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 00:34:13.500 [2024-07-26 01:16:43.763232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.763257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 00:34:13.500 [2024-07-26 01:16:43.763388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.763413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 00:34:13.500 [2024-07-26 01:16:43.763515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.763540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 00:34:13.500 [2024-07-26 01:16:43.763659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.763684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 
00:34:13.500 [2024-07-26 01:16:43.763829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.763868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 00:34:13.500 [2024-07-26 01:16:43.764013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.764041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 00:34:13.500 [2024-07-26 01:16:43.764196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.764223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 00:34:13.500 [2024-07-26 01:16:43.764380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.764406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 00:34:13.500 [2024-07-26 01:16:43.764539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.764565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 
00:34:13.500 [2024-07-26 01:16:43.764676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.764703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 00:34:13.500 [2024-07-26 01:16:43.764805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.764830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 00:34:13.500 [2024-07-26 01:16:43.764945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.764971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 00:34:13.500 [2024-07-26 01:16:43.765085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.765114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 00:34:13.500 [2024-07-26 01:16:43.765247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.765272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 
00:34:13.500 [2024-07-26 01:16:43.765419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.765445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 00:34:13.500 [2024-07-26 01:16:43.765565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.765590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 00:34:13.500 [2024-07-26 01:16:43.765703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.765730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 00:34:13.500 [2024-07-26 01:16:43.765844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.765871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 00:34:13.500 [2024-07-26 01:16:43.765984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.766010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 
00:34:13.500 [2024-07-26 01:16:43.766155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.766182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 00:34:13.500 [2024-07-26 01:16:43.766294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.766321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 00:34:13.500 [2024-07-26 01:16:43.766466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.766494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 00:34:13.500 [2024-07-26 01:16:43.766601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.766627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 00:34:13.500 [2024-07-26 01:16:43.766740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.500 [2024-07-26 01:16:43.766765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.500 qpair failed and we were unable to recover it. 
00:34:13.500 [2024-07-26 01:16:43.766896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.500 [2024-07-26 01:16:43.766921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.500 qpair failed and we were unable to recover it.
00:34:13.500 [2024-07-26 01:16:43.767031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.500 [2024-07-26 01:16:43.767055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.767202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.767227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.767359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.767384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.767485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.767510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.767639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.767665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.767800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.767825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.767969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.768008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.768156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.768184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.768286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.768312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.768475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.768501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.768616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.768642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.768779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.768804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.768917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.768944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.769057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.769089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.769222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.769247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.769386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.769411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.769552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.769578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.769684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.769709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.769822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.769850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.770011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.770037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.770203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.770230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.770340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.770366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.770500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.770526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.770659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.770684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.770788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.770814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.770955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.770980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.771114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.771140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.771252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.771277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.771399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.771425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.771552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.771577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.771679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.771704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.771834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.771859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.771999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.772024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.772165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.772193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.772309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.772335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.772469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.772494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.772656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.772682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.772831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.501 [2024-07-26 01:16:43.772857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.501 qpair failed and we were unable to recover it.
00:34:13.501 [2024-07-26 01:16:43.773000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.773025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.773143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.773169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.773282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.773307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.773449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.773474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.773584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.773609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.773744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.773769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.773879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.773906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.774067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.774096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.774252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.774279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.774416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.774441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.774549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.774576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.774711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.774737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.774849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.774875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.775006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.775031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.775157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.775184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.775346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.775372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.775542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.775568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.775729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.775755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.775891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.775916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.776073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.776100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.776235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.776265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.776426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.776452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.776583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.776609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.776750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.776776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.776937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.776962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.777076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.777104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.777221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.777247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.777380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.777405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.777542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.777567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.777683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.777708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.777842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.777867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.777994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.778019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.778165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.778191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.778328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.778353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.778467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.778492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.778602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.778627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.778757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.778782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.502 [2024-07-26 01:16:43.778892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.502 [2024-07-26 01:16:43.778917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.502 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.779050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.779082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.779210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.779235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.779364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.779388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.779513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.779538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.779677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.779716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.779842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.779882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.780050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.780086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.780224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.780249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.780357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.780383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.780523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.780554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.780683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.780709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.780879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.780904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.781042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.781075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.781188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.781215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.781351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.781377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.781535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.781562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.781698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.781726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.781875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.781914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.782069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.782099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.782242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.782268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.782376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.782403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.782517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.782544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.782681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.782712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.782834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.782860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.783023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.783049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.783163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.783188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.783297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.783322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.783460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.783485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.783586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.783611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.783748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.783772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.783912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.783937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.784044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.784081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.784221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.784247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.784384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.784410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.784520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.784547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.784680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.784706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.784811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.784841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.503 [2024-07-26 01:16:43.784959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.503 [2024-07-26 01:16:43.784986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.503 qpair failed and we were unable to recover it.
00:34:13.504 [2024-07-26 01:16:43.785092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.785117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 00:34:13.504 [2024-07-26 01:16:43.785230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.785255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 00:34:13.504 [2024-07-26 01:16:43.785361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.785387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 00:34:13.504 [2024-07-26 01:16:43.785516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.785541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 00:34:13.504 [2024-07-26 01:16:43.785702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.785728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 
00:34:13.504 [2024-07-26 01:16:43.785822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.785847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 00:34:13.504 [2024-07-26 01:16:43.785996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.786021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 00:34:13.504 [2024-07-26 01:16:43.786161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.786187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 00:34:13.504 [2024-07-26 01:16:43.786295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.786319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 00:34:13.504 [2024-07-26 01:16:43.786423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.786448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 
00:34:13.504 [2024-07-26 01:16:43.786584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.786610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 00:34:13.504 [2024-07-26 01:16:43.786741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.786766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 00:34:13.504 [2024-07-26 01:16:43.786882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.786907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 00:34:13.504 [2024-07-26 01:16:43.787079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.787104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 00:34:13.504 [2024-07-26 01:16:43.787220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.787245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 
00:34:13.504 [2024-07-26 01:16:43.787376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.787401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 00:34:13.504 [2024-07-26 01:16:43.787538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.787563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 00:34:13.504 [2024-07-26 01:16:43.787670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.787695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 00:34:13.504 [2024-07-26 01:16:43.787794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.787819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 00:34:13.504 [2024-07-26 01:16:43.787928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.787954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 
00:34:13.504 [2024-07-26 01:16:43.788077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.788116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 00:34:13.504 [2024-07-26 01:16:43.788260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.788287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 00:34:13.504 [2024-07-26 01:16:43.788452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.788479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 00:34:13.504 [2024-07-26 01:16:43.788613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.788638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 00:34:13.504 [2024-07-26 01:16:43.788788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.788813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 
00:34:13.504 [2024-07-26 01:16:43.788975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.789006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 00:34:13.504 [2024-07-26 01:16:43.789155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.789182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 00:34:13.504 [2024-07-26 01:16:43.789321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.789346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 00:34:13.504 [2024-07-26 01:16:43.789465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.789491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 00:34:13.504 [2024-07-26 01:16:43.789596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.789620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 
00:34:13.504 [2024-07-26 01:16:43.789754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.789779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 00:34:13.504 [2024-07-26 01:16:43.789942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.789967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 00:34:13.504 [2024-07-26 01:16:43.790103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.504 [2024-07-26 01:16:43.790130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.504 qpair failed and we were unable to recover it. 00:34:13.505 [2024-07-26 01:16:43.790259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.790284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 00:34:13.505 [2024-07-26 01:16:43.790456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.790482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 
00:34:13.505 [2024-07-26 01:16:43.790587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.790611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 00:34:13.505 [2024-07-26 01:16:43.790754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.790779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 00:34:13.505 [2024-07-26 01:16:43.790911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.790937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 00:34:13.505 [2024-07-26 01:16:43.791041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.791072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 00:34:13.505 [2024-07-26 01:16:43.791215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.791240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 
00:34:13.505 [2024-07-26 01:16:43.791356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.791381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 00:34:13.505 [2024-07-26 01:16:43.791488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.791513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 00:34:13.505 [2024-07-26 01:16:43.791626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.791653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 00:34:13.505 [2024-07-26 01:16:43.791760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.791786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 00:34:13.505 [2024-07-26 01:16:43.791923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.791948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 
00:34:13.505 [2024-07-26 01:16:43.792084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.792109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 00:34:13.505 [2024-07-26 01:16:43.792268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.792294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 00:34:13.505 [2024-07-26 01:16:43.792425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.792450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 00:34:13.505 [2024-07-26 01:16:43.792584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.792609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 00:34:13.505 [2024-07-26 01:16:43.792746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.792771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 
00:34:13.505 [2024-07-26 01:16:43.792905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.792931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 00:34:13.505 [2024-07-26 01:16:43.793072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.793098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 00:34:13.505 [2024-07-26 01:16:43.793231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.793257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 00:34:13.505 [2024-07-26 01:16:43.793416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.793455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 00:34:13.505 [2024-07-26 01:16:43.793589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.793616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 
00:34:13.505 [2024-07-26 01:16:43.793779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.793806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 00:34:13.505 [2024-07-26 01:16:43.793945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.793971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 00:34:13.505 [2024-07-26 01:16:43.794108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.794135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 00:34:13.505 [2024-07-26 01:16:43.794257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.794284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 00:34:13.505 [2024-07-26 01:16:43.794399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.794425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 
00:34:13.505 [2024-07-26 01:16:43.794561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.794587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 00:34:13.505 [2024-07-26 01:16:43.794724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.794750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 00:34:13.505 [2024-07-26 01:16:43.794914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.794941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 00:34:13.505 [2024-07-26 01:16:43.795046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.795081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 00:34:13.505 [2024-07-26 01:16:43.795188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.795213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 
00:34:13.505 [2024-07-26 01:16:43.795347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.795372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 00:34:13.505 [2024-07-26 01:16:43.795487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.795513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 00:34:13.505 [2024-07-26 01:16:43.795616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.795642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 00:34:13.505 [2024-07-26 01:16:43.795747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.795774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 00:34:13.505 [2024-07-26 01:16:43.795911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.795938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.505 qpair failed and we were unable to recover it. 
00:34:13.505 [2024-07-26 01:16:43.796050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.505 [2024-07-26 01:16:43.796084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.506 qpair failed and we were unable to recover it. 00:34:13.506 [2024-07-26 01:16:43.796221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.506 [2024-07-26 01:16:43.796247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.506 qpair failed and we were unable to recover it. 00:34:13.506 [2024-07-26 01:16:43.796352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.506 [2024-07-26 01:16:43.796378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.506 qpair failed and we were unable to recover it. 00:34:13.506 [2024-07-26 01:16:43.796520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.506 [2024-07-26 01:16:43.796546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.506 qpair failed and we were unable to recover it. 00:34:13.506 [2024-07-26 01:16:43.796674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.506 [2024-07-26 01:16:43.796699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.506 qpair failed and we were unable to recover it. 
00:34:13.506 [2024-07-26 01:16:43.796804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.506 [2024-07-26 01:16:43.796830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.506 qpair failed and we were unable to recover it. 00:34:13.506 [2024-07-26 01:16:43.796970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.506 [2024-07-26 01:16:43.796996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.506 qpair failed and we were unable to recover it. 00:34:13.506 [2024-07-26 01:16:43.797143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.506 [2024-07-26 01:16:43.797170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.506 qpair failed and we were unable to recover it. 00:34:13.506 [2024-07-26 01:16:43.797327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.506 [2024-07-26 01:16:43.797352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.506 qpair failed and we were unable to recover it. 00:34:13.506 [2024-07-26 01:16:43.797457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.506 [2024-07-26 01:16:43.797489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.506 qpair failed and we were unable to recover it. 
[... the same connect() failed (errno = 111) / sock connection error / qpair failed messages repeat for tqpair=0xd70600 and tqpair=0x7fba60000b90 through 01:16:43.814549 ...]
00:34:13.509 [2024-07-26 01:16:43.814712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.509 [2024-07-26 01:16:43.814738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.509 qpair failed and we were unable to recover it. 00:34:13.509 [2024-07-26 01:16:43.814865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.509 [2024-07-26 01:16:43.814890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.509 qpair failed and we were unable to recover it. 00:34:13.509 [2024-07-26 01:16:43.815024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.509 [2024-07-26 01:16:43.815049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.509 qpair failed and we were unable to recover it. 00:34:13.509 [2024-07-26 01:16:43.815197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.509 [2024-07-26 01:16:43.815223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.509 qpair failed and we were unable to recover it. 00:34:13.509 [2024-07-26 01:16:43.815328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.509 [2024-07-26 01:16:43.815355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.509 qpair failed and we were unable to recover it. 
00:34:13.509 [2024-07-26 01:16:43.815465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.509 [2024-07-26 01:16:43.815490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.509 qpair failed and we were unable to recover it. 00:34:13.509 [2024-07-26 01:16:43.815601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.509 [2024-07-26 01:16:43.815627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.509 qpair failed and we were unable to recover it. 00:34:13.509 [2024-07-26 01:16:43.815759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.509 [2024-07-26 01:16:43.815784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.509 qpair failed and we were unable to recover it. 00:34:13.509 [2024-07-26 01:16:43.815883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.509 [2024-07-26 01:16:43.815909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.509 qpair failed and we were unable to recover it. 00:34:13.509 [2024-07-26 01:16:43.816046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.509 [2024-07-26 01:16:43.816079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.509 qpair failed and we were unable to recover it. 
00:34:13.509 [2024-07-26 01:16:43.816186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.509 [2024-07-26 01:16:43.816213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.509 qpair failed and we were unable to recover it. 00:34:13.509 [2024-07-26 01:16:43.816389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.509 [2024-07-26 01:16:43.816428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.509 qpair failed and we were unable to recover it. 00:34:13.509 [2024-07-26 01:16:43.816560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.509 [2024-07-26 01:16:43.816588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.509 qpair failed and we were unable to recover it. 00:34:13.509 [2024-07-26 01:16:43.816720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.509 [2024-07-26 01:16:43.816746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.509 qpair failed and we were unable to recover it. 00:34:13.509 [2024-07-26 01:16:43.816859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.509 [2024-07-26 01:16:43.816885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.509 qpair failed and we were unable to recover it. 
00:34:13.509 [2024-07-26 01:16:43.816991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.509 [2024-07-26 01:16:43.817017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.509 qpair failed and we were unable to recover it. 00:34:13.509 [2024-07-26 01:16:43.817178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.509 [2024-07-26 01:16:43.817217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.509 qpair failed and we were unable to recover it. 00:34:13.509 [2024-07-26 01:16:43.817362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.509 [2024-07-26 01:16:43.817389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.509 qpair failed and we were unable to recover it. 00:34:13.509 [2024-07-26 01:16:43.817535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.509 [2024-07-26 01:16:43.817561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.509 qpair failed and we were unable to recover it. 00:34:13.509 [2024-07-26 01:16:43.817696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.509 [2024-07-26 01:16:43.817721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.509 qpair failed and we were unable to recover it. 
00:34:13.509 [2024-07-26 01:16:43.817863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.509 [2024-07-26 01:16:43.817889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.509 qpair failed and we were unable to recover it. 00:34:13.509 [2024-07-26 01:16:43.818002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.509 [2024-07-26 01:16:43.818029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.509 qpair failed and we were unable to recover it. 00:34:13.509 [2024-07-26 01:16:43.818193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.509 [2024-07-26 01:16:43.818219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.509 qpair failed and we were unable to recover it. 00:34:13.509 [2024-07-26 01:16:43.818376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.509 [2024-07-26 01:16:43.818403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.509 qpair failed and we were unable to recover it. 00:34:13.509 [2024-07-26 01:16:43.818541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.509 [2024-07-26 01:16:43.818567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.509 qpair failed and we were unable to recover it. 
00:34:13.509 [2024-07-26 01:16:43.818739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.509 [2024-07-26 01:16:43.818765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.509 qpair failed and we were unable to recover it. 00:34:13.509 [2024-07-26 01:16:43.818898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.509 [2024-07-26 01:16:43.818923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.509 qpair failed and we were unable to recover it. 00:34:13.509 [2024-07-26 01:16:43.819031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.819057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 00:34:13.510 [2024-07-26 01:16:43.819204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.819230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 00:34:13.510 [2024-07-26 01:16:43.819368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.819393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 
00:34:13.510 [2024-07-26 01:16:43.819508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.819536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 00:34:13.510 [2024-07-26 01:16:43.819638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.819664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 00:34:13.510 [2024-07-26 01:16:43.819776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.819804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 00:34:13.510 [2024-07-26 01:16:43.819967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.819993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 00:34:13.510 [2024-07-26 01:16:43.820102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.820128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 
00:34:13.510 [2024-07-26 01:16:43.820304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.820329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 00:34:13.510 [2024-07-26 01:16:43.820470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.820496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 00:34:13.510 [2024-07-26 01:16:43.820624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.820649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 00:34:13.510 [2024-07-26 01:16:43.820817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.820842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 00:34:13.510 [2024-07-26 01:16:43.820950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.820975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 
00:34:13.510 [2024-07-26 01:16:43.821106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.821132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 00:34:13.510 [2024-07-26 01:16:43.821238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.821263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 00:34:13.510 [2024-07-26 01:16:43.821379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.821404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 00:34:13.510 [2024-07-26 01:16:43.821535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.821560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 00:34:13.510 [2024-07-26 01:16:43.821696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.821723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 
00:34:13.510 [2024-07-26 01:16:43.821865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.821890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 00:34:13.510 [2024-07-26 01:16:43.821991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.822016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 00:34:13.510 [2024-07-26 01:16:43.822138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.822167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 00:34:13.510 [2024-07-26 01:16:43.822309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.822336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 00:34:13.510 [2024-07-26 01:16:43.822436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.822462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 
00:34:13.510 [2024-07-26 01:16:43.822598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.822624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 00:34:13.510 [2024-07-26 01:16:43.822763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.822793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 00:34:13.510 [2024-07-26 01:16:43.822929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.822955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 00:34:13.510 [2024-07-26 01:16:43.823090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.823116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 00:34:13.510 [2024-07-26 01:16:43.823257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.823282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 
00:34:13.510 [2024-07-26 01:16:43.823437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.823462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 00:34:13.510 [2024-07-26 01:16:43.823621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.823647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 00:34:13.510 [2024-07-26 01:16:43.823782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.823808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 00:34:13.510 [2024-07-26 01:16:43.823941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.823966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 00:34:13.510 [2024-07-26 01:16:43.824097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.824123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 
00:34:13.510 [2024-07-26 01:16:43.824263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.824289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 00:34:13.510 [2024-07-26 01:16:43.824398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.824423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 00:34:13.510 [2024-07-26 01:16:43.824560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.824586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 00:34:13.510 [2024-07-26 01:16:43.824686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.510 [2024-07-26 01:16:43.824712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.510 qpair failed and we were unable to recover it. 00:34:13.510 [2024-07-26 01:16:43.824843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.511 [2024-07-26 01:16:43.824868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.511 qpair failed and we were unable to recover it. 
00:34:13.511 [2024-07-26 01:16:43.825009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.511 [2024-07-26 01:16:43.825037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.511 qpair failed and we were unable to recover it. 00:34:13.511 [2024-07-26 01:16:43.825156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.511 [2024-07-26 01:16:43.825181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.511 qpair failed and we were unable to recover it. 00:34:13.511 [2024-07-26 01:16:43.825320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.511 [2024-07-26 01:16:43.825345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.511 qpair failed and we were unable to recover it. 00:34:13.511 [2024-07-26 01:16:43.825482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.511 [2024-07-26 01:16:43.825508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.511 qpair failed and we were unable to recover it. 00:34:13.511 [2024-07-26 01:16:43.825641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.511 [2024-07-26 01:16:43.825667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.511 qpair failed and we were unable to recover it. 
00:34:13.511 [2024-07-26 01:16:43.825777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.511 [2024-07-26 01:16:43.825803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.511 qpair failed and we were unable to recover it. 00:34:13.511 [2024-07-26 01:16:43.825968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.511 [2024-07-26 01:16:43.825993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.511 qpair failed and we were unable to recover it. 00:34:13.511 [2024-07-26 01:16:43.826158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.511 [2024-07-26 01:16:43.826184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.511 qpair failed and we were unable to recover it. 00:34:13.511 [2024-07-26 01:16:43.826303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.511 [2024-07-26 01:16:43.826342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.511 qpair failed and we were unable to recover it. 00:34:13.511 [2024-07-26 01:16:43.826509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.511 [2024-07-26 01:16:43.826536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.511 qpair failed and we were unable to recover it. 
00:34:13.511 [2024-07-26 01:16:43.826652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.511 [2024-07-26 01:16:43.826679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.511 qpair failed and we were unable to recover it.
00:34:13.512 [2024-07-26 01:16:43.833108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.512 [2024-07-26 01:16:43.833146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.512 qpair failed and we were unable to recover it.
00:34:13.512 [2024-07-26 01:16:43.834221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.512 [2024-07-26 01:16:43.834260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.512 qpair failed and we were unable to recover it.
[... the same connect() failed / errno = 111 / qpair failed triple repeats continuously from 01:16:43.826 through 01:16:43.844, cycling over tqpair=0x7fba58000b90, 0xd70600, and 0x7fba60000b90, every attempt targeting addr=10.0.0.2, port=4420 ...]
00:34:13.514 [2024-07-26 01:16:43.844693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.514 [2024-07-26 01:16:43.844718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.514 qpair failed and we were unable to recover it. 00:34:13.514 [2024-07-26 01:16:43.844826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.514 [2024-07-26 01:16:43.844854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.514 qpair failed and we were unable to recover it. 00:34:13.514 [2024-07-26 01:16:43.845017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.514 [2024-07-26 01:16:43.845044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.514 qpair failed and we were unable to recover it. 00:34:13.514 [2024-07-26 01:16:43.845157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.514 [2024-07-26 01:16:43.845183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.514 qpair failed and we were unable to recover it. 00:34:13.514 [2024-07-26 01:16:43.845319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.514 [2024-07-26 01:16:43.845346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.514 qpair failed and we were unable to recover it. 
00:34:13.514 [2024-07-26 01:16:43.845484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.514 [2024-07-26 01:16:43.845510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.514 qpair failed and we were unable to recover it. 00:34:13.514 [2024-07-26 01:16:43.845639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.514 [2024-07-26 01:16:43.845664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.514 qpair failed and we were unable to recover it. 00:34:13.514 [2024-07-26 01:16:43.845769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.514 [2024-07-26 01:16:43.845795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.514 qpair failed and we were unable to recover it. 00:34:13.514 [2024-07-26 01:16:43.845958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.514 [2024-07-26 01:16:43.845984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.514 qpair failed and we were unable to recover it. 00:34:13.514 [2024-07-26 01:16:43.846155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.514 [2024-07-26 01:16:43.846181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.514 qpair failed and we were unable to recover it. 
00:34:13.514 [2024-07-26 01:16:43.846294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.514 [2024-07-26 01:16:43.846320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.514 qpair failed and we were unable to recover it. 00:34:13.514 [2024-07-26 01:16:43.846433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.514 [2024-07-26 01:16:43.846459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.514 qpair failed and we were unable to recover it. 00:34:13.514 [2024-07-26 01:16:43.846564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.514 [2024-07-26 01:16:43.846589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.514 qpair failed and we were unable to recover it. 00:34:13.514 [2024-07-26 01:16:43.846751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.514 [2024-07-26 01:16:43.846777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.514 qpair failed and we were unable to recover it. 00:34:13.514 [2024-07-26 01:16:43.846912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.514 [2024-07-26 01:16:43.846938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.514 qpair failed and we were unable to recover it. 
00:34:13.514 [2024-07-26 01:16:43.847095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.514 [2024-07-26 01:16:43.847121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.514 qpair failed and we were unable to recover it. 00:34:13.514 [2024-07-26 01:16:43.847231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.514 [2024-07-26 01:16:43.847257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.514 qpair failed and we were unable to recover it. 00:34:13.514 [2024-07-26 01:16:43.847418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.514 [2024-07-26 01:16:43.847444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.514 qpair failed and we were unable to recover it. 00:34:13.514 [2024-07-26 01:16:43.847580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.514 [2024-07-26 01:16:43.847606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.514 qpair failed and we were unable to recover it. 00:34:13.514 [2024-07-26 01:16:43.847744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.514 [2024-07-26 01:16:43.847769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 
00:34:13.515 [2024-07-26 01:16:43.847933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.847959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 00:34:13.515 [2024-07-26 01:16:43.848082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.848121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 00:34:13.515 [2024-07-26 01:16:43.848238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.848270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 00:34:13.515 [2024-07-26 01:16:43.848393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.848420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 00:34:13.515 [2024-07-26 01:16:43.848559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.848584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 
00:34:13.515 [2024-07-26 01:16:43.848721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.848748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 00:34:13.515 [2024-07-26 01:16:43.848884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.848910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 00:34:13.515 [2024-07-26 01:16:43.849047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.849080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 00:34:13.515 [2024-07-26 01:16:43.849216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.849241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 00:34:13.515 [2024-07-26 01:16:43.849402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.849427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 
00:34:13.515 [2024-07-26 01:16:43.849558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.849584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 00:34:13.515 [2024-07-26 01:16:43.849685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.849709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 00:34:13.515 [2024-07-26 01:16:43.849844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.849869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 00:34:13.515 [2024-07-26 01:16:43.849980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.850005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 00:34:13.515 [2024-07-26 01:16:43.850147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.850173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 
00:34:13.515 [2024-07-26 01:16:43.850284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.850310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 00:34:13.515 [2024-07-26 01:16:43.850425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.850450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 00:34:13.515 [2024-07-26 01:16:43.850584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.850609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 00:34:13.515 [2024-07-26 01:16:43.850721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.850746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 00:34:13.515 [2024-07-26 01:16:43.850883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.850911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 
00:34:13.515 [2024-07-26 01:16:43.851052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.851088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 00:34:13.515 [2024-07-26 01:16:43.851194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.851220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 00:34:13.515 [2024-07-26 01:16:43.851352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.851378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 00:34:13.515 [2024-07-26 01:16:43.851517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.851542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 00:34:13.515 [2024-07-26 01:16:43.851679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.851705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 
00:34:13.515 [2024-07-26 01:16:43.851812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.851837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 00:34:13.515 [2024-07-26 01:16:43.851974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.851999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 00:34:13.515 [2024-07-26 01:16:43.852115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.852141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 00:34:13.515 [2024-07-26 01:16:43.852252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.852279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 00:34:13.515 [2024-07-26 01:16:43.852410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.852440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 
00:34:13.515 [2024-07-26 01:16:43.852575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.852601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 00:34:13.515 [2024-07-26 01:16:43.852702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.852728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 00:34:13.515 [2024-07-26 01:16:43.852844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.852870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 00:34:13.515 [2024-07-26 01:16:43.853008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.853034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 00:34:13.515 [2024-07-26 01:16:43.853196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.853221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 
00:34:13.515 [2024-07-26 01:16:43.853361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.853385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 00:34:13.515 [2024-07-26 01:16:43.853518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.515 [2024-07-26 01:16:43.853543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.515 qpair failed and we were unable to recover it. 00:34:13.515 [2024-07-26 01:16:43.853681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.516 [2024-07-26 01:16:43.853707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.516 qpair failed and we were unable to recover it. 00:34:13.516 [2024-07-26 01:16:43.853838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.516 [2024-07-26 01:16:43.853863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.516 qpair failed and we were unable to recover it. 00:34:13.516 [2024-07-26 01:16:43.853995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.516 [2024-07-26 01:16:43.854019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.516 qpair failed and we were unable to recover it. 
00:34:13.516 [2024-07-26 01:16:43.854144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.516 [2024-07-26 01:16:43.854170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.516 qpair failed and we were unable to recover it. 00:34:13.516 [2024-07-26 01:16:43.854288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.516 [2024-07-26 01:16:43.854313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.516 qpair failed and we were unable to recover it. 00:34:13.516 [2024-07-26 01:16:43.854447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.516 [2024-07-26 01:16:43.854472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.516 qpair failed and we were unable to recover it. 00:34:13.516 [2024-07-26 01:16:43.854612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.516 [2024-07-26 01:16:43.854638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.516 qpair failed and we were unable to recover it. 00:34:13.516 [2024-07-26 01:16:43.854768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.516 [2024-07-26 01:16:43.854793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.516 qpair failed and we were unable to recover it. 
00:34:13.516 [2024-07-26 01:16:43.854924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.516 [2024-07-26 01:16:43.854949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.516 qpair failed and we were unable to recover it. 00:34:13.516 [2024-07-26 01:16:43.855085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.516 [2024-07-26 01:16:43.855111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.516 qpair failed and we were unable to recover it. 00:34:13.516 [2024-07-26 01:16:43.855214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.516 [2024-07-26 01:16:43.855239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.516 qpair failed and we were unable to recover it. 00:34:13.516 [2024-07-26 01:16:43.855376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.516 [2024-07-26 01:16:43.855401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.516 qpair failed and we were unable to recover it. 00:34:13.516 [2024-07-26 01:16:43.855512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.516 [2024-07-26 01:16:43.855537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.516 qpair failed and we were unable to recover it. 
00:34:13.516 [2024-07-26 01:16:43.855640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.516 [2024-07-26 01:16:43.855665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.516 qpair failed and we were unable to recover it. 00:34:13.516 [2024-07-26 01:16:43.855800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.516 [2024-07-26 01:16:43.855825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.516 qpair failed and we were unable to recover it. 00:34:13.516 [2024-07-26 01:16:43.855923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.516 [2024-07-26 01:16:43.855949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.516 qpair failed and we were unable to recover it. 00:34:13.516 [2024-07-26 01:16:43.856081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.516 [2024-07-26 01:16:43.856106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.516 qpair failed and we were unable to recover it. 00:34:13.516 [2024-07-26 01:16:43.856203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.516 [2024-07-26 01:16:43.856228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.516 qpair failed and we were unable to recover it. 
00:34:13.516 [2024-07-26 01:16:43.856336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.516 [2024-07-26 01:16:43.856361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.516 qpair failed and we were unable to recover it.
00:34:13.517 [2024-07-26 01:16:43.860862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.517 [2024-07-26 01:16:43.860902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.517 qpair failed and we were unable to recover it.
00:34:13.517 [2024-07-26 01:16:43.862939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.517 [2024-07-26 01:16:43.862975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.517 qpair failed and we were unable to recover it.
00:34:13.519 [2024-07-26 01:16:43.874342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.519 [2024-07-26 01:16:43.874370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.519 qpair failed and we were unable to recover it. 00:34:13.519 [2024-07-26 01:16:43.874509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.519 [2024-07-26 01:16:43.874536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.519 qpair failed and we were unable to recover it. 00:34:13.519 [2024-07-26 01:16:43.874677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.519 [2024-07-26 01:16:43.874705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.519 qpair failed and we were unable to recover it. 00:34:13.519 [2024-07-26 01:16:43.874841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.519 [2024-07-26 01:16:43.874867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.519 qpair failed and we were unable to recover it. 00:34:13.519 [2024-07-26 01:16:43.874975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.519 [2024-07-26 01:16:43.875000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.519 qpair failed and we were unable to recover it. 
00:34:13.519 [2024-07-26 01:16:43.875108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.519 [2024-07-26 01:16:43.875135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.519 qpair failed and we were unable to recover it. 00:34:13.519 [2024-07-26 01:16:43.875258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.519 [2024-07-26 01:16:43.875284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.519 qpair failed and we were unable to recover it. 00:34:13.519 [2024-07-26 01:16:43.875419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.519 [2024-07-26 01:16:43.875444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.519 qpair failed and we were unable to recover it. 00:34:13.519 [2024-07-26 01:16:43.875544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.519 [2024-07-26 01:16:43.875575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.519 qpair failed and we were unable to recover it. 00:34:13.519 [2024-07-26 01:16:43.875707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.519 [2024-07-26 01:16:43.875733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.519 qpair failed and we were unable to recover it. 
00:34:13.519 [2024-07-26 01:16:43.875872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.519 [2024-07-26 01:16:43.875897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.519 qpair failed and we were unable to recover it. 00:34:13.519 [2024-07-26 01:16:43.876040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.519 [2024-07-26 01:16:43.876075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.519 qpair failed and we were unable to recover it. 00:34:13.519 [2024-07-26 01:16:43.876196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.519 [2024-07-26 01:16:43.876222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.519 qpair failed and we were unable to recover it. 00:34:13.519 [2024-07-26 01:16:43.876363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.519 [2024-07-26 01:16:43.876394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.519 qpair failed and we were unable to recover it. 00:34:13.519 [2024-07-26 01:16:43.876510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.519 [2024-07-26 01:16:43.876537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.519 qpair failed and we were unable to recover it. 
00:34:13.519 [2024-07-26 01:16:43.876653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.519 [2024-07-26 01:16:43.876678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.519 qpair failed and we were unable to recover it. 00:34:13.519 [2024-07-26 01:16:43.876812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.519 [2024-07-26 01:16:43.876837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.519 qpair failed and we were unable to recover it. 00:34:13.519 [2024-07-26 01:16:43.876950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.519 [2024-07-26 01:16:43.876975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.519 qpair failed and we were unable to recover it. 00:34:13.519 [2024-07-26 01:16:43.877110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.519 [2024-07-26 01:16:43.877136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.519 qpair failed and we were unable to recover it. 00:34:13.519 [2024-07-26 01:16:43.877268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.519 [2024-07-26 01:16:43.877293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.519 qpair failed and we were unable to recover it. 
00:34:13.519 [2024-07-26 01:16:43.877430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.519 [2024-07-26 01:16:43.877457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.519 qpair failed and we were unable to recover it. 00:34:13.519 [2024-07-26 01:16:43.877632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.519 [2024-07-26 01:16:43.877659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.519 qpair failed and we were unable to recover it. 00:34:13.519 [2024-07-26 01:16:43.877795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.519 [2024-07-26 01:16:43.877821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.519 qpair failed and we were unable to recover it. 00:34:13.519 [2024-07-26 01:16:43.877952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.519 [2024-07-26 01:16:43.877977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.519 qpair failed and we were unable to recover it. 00:34:13.519 [2024-07-26 01:16:43.878078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.519 [2024-07-26 01:16:43.878105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.519 qpair failed and we were unable to recover it. 
00:34:13.519 [2024-07-26 01:16:43.878214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.519 [2024-07-26 01:16:43.878240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.519 qpair failed and we were unable to recover it. 00:34:13.519 [2024-07-26 01:16:43.878371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.519 [2024-07-26 01:16:43.878396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.519 qpair failed and we were unable to recover it. 00:34:13.519 [2024-07-26 01:16:43.878553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.519 [2024-07-26 01:16:43.878579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.519 qpair failed and we were unable to recover it. 00:34:13.519 [2024-07-26 01:16:43.878687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.519 [2024-07-26 01:16:43.878712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.519 qpair failed and we were unable to recover it. 00:34:13.520 [2024-07-26 01:16:43.878822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.520 [2024-07-26 01:16:43.878848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.520 qpair failed and we were unable to recover it. 
00:34:13.520 [2024-07-26 01:16:43.878988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.520 [2024-07-26 01:16:43.879016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.520 qpair failed and we were unable to recover it. 00:34:13.520 [2024-07-26 01:16:43.879137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.520 [2024-07-26 01:16:43.879163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.520 qpair failed and we were unable to recover it. 00:34:13.520 [2024-07-26 01:16:43.879265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.520 [2024-07-26 01:16:43.879292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.520 qpair failed and we were unable to recover it. 00:34:13.520 [2024-07-26 01:16:43.879427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.520 [2024-07-26 01:16:43.879453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.520 qpair failed and we were unable to recover it. 00:34:13.520 [2024-07-26 01:16:43.879554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.520 [2024-07-26 01:16:43.879579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.520 qpair failed and we were unable to recover it. 
00:34:13.520 [2024-07-26 01:16:43.879719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.520 [2024-07-26 01:16:43.879745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.520 qpair failed and we were unable to recover it. 00:34:13.520 [2024-07-26 01:16:43.879854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.520 [2024-07-26 01:16:43.879880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.520 qpair failed and we were unable to recover it. 00:34:13.520 [2024-07-26 01:16:43.880020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.520 [2024-07-26 01:16:43.880045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.520 qpair failed and we were unable to recover it. 00:34:13.520 [2024-07-26 01:16:43.880187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.520 [2024-07-26 01:16:43.880213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.520 qpair failed and we were unable to recover it. 00:34:13.520 [2024-07-26 01:16:43.880333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.520 [2024-07-26 01:16:43.880360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.520 qpair failed and we were unable to recover it. 
00:34:13.520 [2024-07-26 01:16:43.880468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.520 [2024-07-26 01:16:43.880494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.520 qpair failed and we were unable to recover it. 00:34:13.520 [2024-07-26 01:16:43.880643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.520 [2024-07-26 01:16:43.880669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.520 qpair failed and we were unable to recover it. 00:34:13.520 [2024-07-26 01:16:43.880786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.520 [2024-07-26 01:16:43.880811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.520 qpair failed and we were unable to recover it. 00:34:13.520 [2024-07-26 01:16:43.880945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.520 [2024-07-26 01:16:43.880974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.520 qpair failed and we were unable to recover it. 00:34:13.520 [2024-07-26 01:16:43.881128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.520 [2024-07-26 01:16:43.881167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.520 qpair failed and we were unable to recover it. 
00:34:13.520 [2024-07-26 01:16:43.881289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.520 [2024-07-26 01:16:43.881317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.520 qpair failed and we were unable to recover it. 00:34:13.520 [2024-07-26 01:16:43.881450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.520 [2024-07-26 01:16:43.881476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.520 qpair failed and we were unable to recover it. 00:34:13.520 [2024-07-26 01:16:43.881607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.520 [2024-07-26 01:16:43.881633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.520 qpair failed and we were unable to recover it. 00:34:13.520 [2024-07-26 01:16:43.881750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.520 [2024-07-26 01:16:43.881785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.520 qpair failed and we were unable to recover it. 00:34:13.520 [2024-07-26 01:16:43.881932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.520 [2024-07-26 01:16:43.881958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.520 qpair failed and we were unable to recover it. 
00:34:13.520 [2024-07-26 01:16:43.882095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.520 [2024-07-26 01:16:43.882122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.520 qpair failed and we were unable to recover it. 00:34:13.520 [2024-07-26 01:16:43.882262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.520 [2024-07-26 01:16:43.882288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.520 qpair failed and we were unable to recover it. 00:34:13.520 [2024-07-26 01:16:43.882419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.520 [2024-07-26 01:16:43.882445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.520 qpair failed and we were unable to recover it. 00:34:13.520 [2024-07-26 01:16:43.882553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.520 [2024-07-26 01:16:43.882579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.520 qpair failed and we were unable to recover it. 00:34:13.520 [2024-07-26 01:16:43.882694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.520 [2024-07-26 01:16:43.882720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.520 qpair failed and we were unable to recover it. 
00:34:13.520 [2024-07-26 01:16:43.882833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.520 [2024-07-26 01:16:43.882858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.520 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.882975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.883000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.883154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.883184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.883295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.883321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.883435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.883462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 
00:34:13.793 [2024-07-26 01:16:43.883599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.883626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.883798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.883824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.884025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.884052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.884162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.884188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.884320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.884346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 
00:34:13.793 [2024-07-26 01:16:43.884460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.884486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.884607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.884633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.884772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.884799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.884939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.884967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.885085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.885112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 
00:34:13.793 [2024-07-26 01:16:43.885250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.885275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.885400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.885426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.885535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.885561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.885669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.885695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.885805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.885830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 
00:34:13.793 [2024-07-26 01:16:43.885947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.885973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.886097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.886135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.886246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.886273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.886385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.886413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.886556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.886582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 
00:34:13.793 [2024-07-26 01:16:43.886720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.886747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.886863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.886889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.887000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.887025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.887141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.887168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.887305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.887332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 
00:34:13.793 [2024-07-26 01:16:43.887489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.887515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.887650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.887675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.887819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.887846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.887951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.887982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.888144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.888171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 
00:34:13.793 [2024-07-26 01:16:43.888281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.888307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.888469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.888494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.888626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.888652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.888770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.888797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.888929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.888955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 
00:34:13.793 [2024-07-26 01:16:43.889086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.889112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.889280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.889306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.889438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.889465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.889608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.889634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.889748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.889774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 
00:34:13.793 [2024-07-26 01:16:43.889920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.889945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.890086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.890113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.890253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.890280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.793 qpair failed and we were unable to recover it. 00:34:13.793 [2024-07-26 01:16:43.890418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.793 [2024-07-26 01:16:43.890445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.890552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.890578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 
00:34:13.794 [2024-07-26 01:16:43.890714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.890741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.890877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.890906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.891027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.891073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.891246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.891272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.891394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.891424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 
00:34:13.794 [2024-07-26 01:16:43.891564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.891590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.891702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.891728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.891839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.891864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.892023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.892048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.892160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.892185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 
00:34:13.794 [2024-07-26 01:16:43.892301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.892331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.892490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.892516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.892656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.892681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.892787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.892813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.892919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.892944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 
00:34:13.794 [2024-07-26 01:16:43.893076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.893102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.893232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.893257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.893416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.893441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.893550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.893575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.893709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.893734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 
00:34:13.794 [2024-07-26 01:16:43.893838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.893863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.893972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.893997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.894100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.894126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.894241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.894266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.894372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.894397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 
00:34:13.794 [2024-07-26 01:16:43.894498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.894523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.894633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.894658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.894792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.894818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.894945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.894970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.895096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.895135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 
00:34:13.794 [2024-07-26 01:16:43.895262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.895289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.895425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.895459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.895576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.895602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.895743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.895770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.895875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.895900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 
00:34:13.794 [2024-07-26 01:16:43.896012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.896038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.896187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.896213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.896341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.896370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.896478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.896503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.896617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.896642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 
00:34:13.794 [2024-07-26 01:16:43.896773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.896798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.896902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.896927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.897035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.897067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.897231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.897256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.897417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.897442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 
00:34:13.794 [2024-07-26 01:16:43.897572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.897597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.897728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.897753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.897861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.897886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.898025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.898052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.898172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.898197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 
00:34:13.794 [2024-07-26 01:16:43.898298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.898323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.898465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.898490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.898625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.898650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.898789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.794 [2024-07-26 01:16:43.898814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.794 qpair failed and we were unable to recover it. 00:34:13.794 [2024-07-26 01:16:43.898929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.795 [2024-07-26 01:16:43.898955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.795 qpair failed and we were unable to recover it. 
00:34:13.795 [2024-07-26 01:16:43.899141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.795 [2024-07-26 01:16:43.899181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.795 qpair failed and we were unable to recover it. 00:34:13.795 [2024-07-26 01:16:43.899348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.795 [2024-07-26 01:16:43.899375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.795 qpair failed and we were unable to recover it. 00:34:13.795 [2024-07-26 01:16:43.899518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.795 [2024-07-26 01:16:43.899544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.795 qpair failed and we were unable to recover it. 00:34:13.795 [2024-07-26 01:16:43.899651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.795 [2024-07-26 01:16:43.899677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.795 qpair failed and we were unable to recover it. 00:34:13.795 [2024-07-26 01:16:43.899816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.795 [2024-07-26 01:16:43.899842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.795 qpair failed and we were unable to recover it. 
00:34:13.795 [2024-07-26 01:16:43.899974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.795 [2024-07-26 01:16:43.900003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.795 qpair failed and we were unable to recover it. 00:34:13.795 [2024-07-26 01:16:43.900144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.795 [2024-07-26 01:16:43.900170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.795 qpair failed and we were unable to recover it. 00:34:13.795 [2024-07-26 01:16:43.900276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.795 [2024-07-26 01:16:43.900301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.795 qpair failed and we were unable to recover it. 00:34:13.795 [2024-07-26 01:16:43.900471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.795 [2024-07-26 01:16:43.900496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.795 qpair failed and we were unable to recover it. 00:34:13.795 [2024-07-26 01:16:43.900605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.795 [2024-07-26 01:16:43.900634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.795 qpair failed and we were unable to recover it. 
00:34:13.795 [2024-07-26 01:16:43.900738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.795 [2024-07-26 01:16:43.900763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.795 qpair failed and we were unable to recover it. 00:34:13.795 [2024-07-26 01:16:43.900875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.795 [2024-07-26 01:16:43.900901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.795 qpair failed and we were unable to recover it. 00:34:13.795 [2024-07-26 01:16:43.901030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.795 [2024-07-26 01:16:43.901054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.795 qpair failed and we were unable to recover it. 00:34:13.795 [2024-07-26 01:16:43.901197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.795 [2024-07-26 01:16:43.901223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.795 qpair failed and we were unable to recover it. 00:34:13.795 [2024-07-26 01:16:43.901362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.795 [2024-07-26 01:16:43.901388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.795 qpair failed and we were unable to recover it. 
00:34:13.795 [2024-07-26 01:16:43.901547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.795 [2024-07-26 01:16:43.901572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.795 qpair failed and we were unable to recover it. 00:34:13.795 [2024-07-26 01:16:43.901702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.795 [2024-07-26 01:16:43.901728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.795 qpair failed and we were unable to recover it. 00:34:13.795 [2024-07-26 01:16:43.901835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.795 [2024-07-26 01:16:43.901861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.795 qpair failed and we were unable to recover it. 00:34:13.795 [2024-07-26 01:16:43.901972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.795 [2024-07-26 01:16:43.901998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.795 qpair failed and we were unable to recover it. 00:34:13.795 [2024-07-26 01:16:43.902102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.795 [2024-07-26 01:16:43.902128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.795 qpair failed and we were unable to recover it. 
00:34:13.795 [2024-07-26 01:16:43.902241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.902266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.902375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.902400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.902528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.902553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.902663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.902688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.902787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.902812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.902959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.903004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.903142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.903172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.903332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.903360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.903470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.903495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.903632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.903658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.903818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.903844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.903979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.904005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.904145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.904171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.904302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.904327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.904430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.904455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.904568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.904593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.904701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.904726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.904856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.904881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.904986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.905011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.905140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.905166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.905295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.905321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.905430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.905455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.905564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.905589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.905734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.905774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.905902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.905931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.906070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.906098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.906233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.906259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.906397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.906424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.906532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.906559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.906709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.906735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.906876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.906901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.907050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.907095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.907261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.907288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.907397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.907423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.907557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.907583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.907697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.907725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.795 [2024-07-26 01:16:43.907834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.795 [2024-07-26 01:16:43.907860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.795 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.908031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.908088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.908216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.908243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.908358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.908384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.908545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.908570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.908700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.908725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.908836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.908862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.908966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.908991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.909127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.909153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.909290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.909316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.909455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.909480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.909614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.909639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.909808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.909834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.909967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.909992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.910131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.910157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.910300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.910325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.910467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.910493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.910596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.910621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.910804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.910830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.910937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.910962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.911102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.911128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.911263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.911295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.911406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.911431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.911598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.911623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.911728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.911753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.911885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.911910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.912017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.912042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.912186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.912211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.912338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.912364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.912504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.912529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.912662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.912687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.912819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.912844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.912971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.912996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.913131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.913157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.913319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.913345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.913485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.913511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.913613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.913639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.913776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.913802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.913936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.913962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.914073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.914099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.914230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.914255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.914414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.914439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.914551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.914576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.914706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.914732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.914836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.914861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.914991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.915017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.915138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.915164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.915273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.915298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.915442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.915472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.915589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.915615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.915721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.915746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.796 [2024-07-26 01:16:43.915908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.796 [2024-07-26 01:16:43.915934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.796 qpair failed and we were unable to recover it.
00:34:13.797 [2024-07-26 01:16:43.916083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.797 [2024-07-26 01:16:43.916109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.797 qpair failed and we were unable to recover it.
00:34:13.797 [2024-07-26 01:16:43.916244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.797 [2024-07-26 01:16:43.916269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.797 qpair failed and we were unable to recover it.
00:34:13.797 [2024-07-26 01:16:43.916409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.797 [2024-07-26 01:16:43.916435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.797 qpair failed and we were unable to recover it.
00:34:13.797 [2024-07-26 01:16:43.916538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.797 [2024-07-26 01:16:43.916564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.797 qpair failed and we were unable to recover it.
00:34:13.797 [2024-07-26 01:16:43.916666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.797 [2024-07-26 01:16:43.916691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.797 qpair failed and we were unable to recover it.
00:34:13.797 [2024-07-26 01:16:43.916833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.797 [2024-07-26 01:16:43.916859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.797 qpair failed and we were unable to recover it.
00:34:13.797 [2024-07-26 01:16:43.916993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.797 [2024-07-26 01:16:43.917018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.797 qpair failed and we were unable to recover it.
00:34:13.797 [2024-07-26 01:16:43.917159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.917185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.917303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.917329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.917437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.917462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.917581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.917607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.917770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.917796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 
00:34:13.797 [2024-07-26 01:16:43.917907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.917932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.918074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.918100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.918236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.918261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.918421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.918446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.918544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.918568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 
00:34:13.797 [2024-07-26 01:16:43.918706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.918731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.918865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.918890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.919018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.919043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.919184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.919210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.919341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.919366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 
00:34:13.797 [2024-07-26 01:16:43.919521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.919546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.919706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.919731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.919843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.919868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.919969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.919994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.920154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.920180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 
00:34:13.797 [2024-07-26 01:16:43.920283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.920309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.920409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.920433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.920562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.920588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.920697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.920722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.920852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.920877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 
00:34:13.797 [2024-07-26 01:16:43.920986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.921011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.921139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.921165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.921272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.921297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.921458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.921483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.921583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.921608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 
00:34:13.797 [2024-07-26 01:16:43.921742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.921771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.921884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.921909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.922019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.922044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.922160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.922185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.922322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.922347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 
00:34:13.797 [2024-07-26 01:16:43.922481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.922506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.922666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.922691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.922827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.922853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.922985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.923010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.923147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.923173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 
00:34:13.797 [2024-07-26 01:16:43.923282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.923306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.923435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.923460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.923599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.923623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.923724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.923749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.923869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.923893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 
00:34:13.797 [2024-07-26 01:16:43.924030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.924054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.924226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.924252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.924388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.924412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.924543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.924567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.924678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.924702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 
00:34:13.797 [2024-07-26 01:16:43.924839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.924864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.925025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.797 [2024-07-26 01:16:43.925050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.797 qpair failed and we were unable to recover it. 00:34:13.797 [2024-07-26 01:16:43.925188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.925213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 00:34:13.798 [2024-07-26 01:16:43.925370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.925395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 00:34:13.798 [2024-07-26 01:16:43.925506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.925532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 
00:34:13.798 [2024-07-26 01:16:43.925644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.925668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 00:34:13.798 [2024-07-26 01:16:43.925804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.925829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 00:34:13.798 [2024-07-26 01:16:43.925935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.925964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 00:34:13.798 [2024-07-26 01:16:43.926082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.926113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 00:34:13.798 [2024-07-26 01:16:43.926257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.926283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 
00:34:13.798 [2024-07-26 01:16:43.926416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.926440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 00:34:13.798 [2024-07-26 01:16:43.926579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.926604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 00:34:13.798 [2024-07-26 01:16:43.926708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.926732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 00:34:13.798 [2024-07-26 01:16:43.926873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.926897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 00:34:13.798 [2024-07-26 01:16:43.927031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.927056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 
00:34:13.798 [2024-07-26 01:16:43.927197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.927221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 00:34:13.798 [2024-07-26 01:16:43.927356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.927381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 00:34:13.798 [2024-07-26 01:16:43.927515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.927540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 00:34:13.798 [2024-07-26 01:16:43.927676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.927702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 00:34:13.798 [2024-07-26 01:16:43.927863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.927889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 
00:34:13.798 [2024-07-26 01:16:43.928024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.928050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 00:34:13.798 [2024-07-26 01:16:43.928215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.928240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 00:34:13.798 [2024-07-26 01:16:43.928398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.928423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 00:34:13.798 [2024-07-26 01:16:43.928524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.928548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 00:34:13.798 [2024-07-26 01:16:43.928658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.928682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 
00:34:13.798 [2024-07-26 01:16:43.928791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.928814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 00:34:13.798 [2024-07-26 01:16:43.928948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.928972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 00:34:13.798 [2024-07-26 01:16:43.929101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.929127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 00:34:13.798 [2024-07-26 01:16:43.929235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.929259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 00:34:13.798 [2024-07-26 01:16:43.929373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.929399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 
00:34:13.798 [2024-07-26 01:16:43.929557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.929581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 00:34:13.798 [2024-07-26 01:16:43.929711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.929735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 00:34:13.798 [2024-07-26 01:16:43.929870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.929895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 00:34:13.798 [2024-07-26 01:16:43.930051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.930083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 00:34:13.798 [2024-07-26 01:16:43.930216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.798 [2024-07-26 01:16:43.930241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.798 qpair failed and we were unable to recover it. 
00:34:13.798 [2024-07-26 01:16:43.930385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.798 [2024-07-26 01:16:43.930410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.798 qpair failed and we were unable to recover it.
00:34:13.800 [2024-07-26 01:16:43.947937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.800 [2024-07-26 01:16:43.947962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.800 qpair failed and we were unable to recover it. 00:34:13.800 [2024-07-26 01:16:43.948081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.800 [2024-07-26 01:16:43.948106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.800 qpair failed and we were unable to recover it. 00:34:13.800 [2024-07-26 01:16:43.948232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.800 [2024-07-26 01:16:43.948256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.800 qpair failed and we were unable to recover it. 00:34:13.800 [2024-07-26 01:16:43.948387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.800 [2024-07-26 01:16:43.948411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.800 qpair failed and we were unable to recover it. 00:34:13.800 [2024-07-26 01:16:43.948550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.800 [2024-07-26 01:16:43.948574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.800 qpair failed and we were unable to recover it. 
00:34:13.800 [2024-07-26 01:16:43.948680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.800 [2024-07-26 01:16:43.948705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.800 qpair failed and we were unable to recover it. 00:34:13.800 [2024-07-26 01:16:43.948812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.800 [2024-07-26 01:16:43.948836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.800 qpair failed and we were unable to recover it. 00:34:13.800 [2024-07-26 01:16:43.948949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.800 [2024-07-26 01:16:43.948975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.800 qpair failed and we were unable to recover it. 00:34:13.800 [2024-07-26 01:16:43.949108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.800 [2024-07-26 01:16:43.949133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.800 qpair failed and we were unable to recover it. 00:34:13.800 [2024-07-26 01:16:43.949237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.800 [2024-07-26 01:16:43.949261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.800 qpair failed and we were unable to recover it. 
00:34:13.800 [2024-07-26 01:16:43.949394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.800 [2024-07-26 01:16:43.949419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.800 qpair failed and we were unable to recover it. 00:34:13.800 [2024-07-26 01:16:43.949556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.800 [2024-07-26 01:16:43.949581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.800 qpair failed and we were unable to recover it. 00:34:13.800 [2024-07-26 01:16:43.949721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.800 [2024-07-26 01:16:43.949746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.800 qpair failed and we were unable to recover it. 00:34:13.800 [2024-07-26 01:16:43.949856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.800 [2024-07-26 01:16:43.949880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.800 qpair failed and we were unable to recover it. 00:34:13.800 [2024-07-26 01:16:43.950019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.800 [2024-07-26 01:16:43.950045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.800 qpair failed and we were unable to recover it. 
00:34:13.800 [2024-07-26 01:16:43.950223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.800 [2024-07-26 01:16:43.950248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.800 qpair failed and we were unable to recover it. 00:34:13.800 [2024-07-26 01:16:43.950351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.800 [2024-07-26 01:16:43.950376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.800 qpair failed and we were unable to recover it. 00:34:13.800 [2024-07-26 01:16:43.950483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.800 [2024-07-26 01:16:43.950507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.800 qpair failed and we were unable to recover it. 00:34:13.800 [2024-07-26 01:16:43.950633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.800 [2024-07-26 01:16:43.950658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.800 qpair failed and we were unable to recover it. 00:34:13.800 [2024-07-26 01:16:43.950796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.800 [2024-07-26 01:16:43.950820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.800 qpair failed and we were unable to recover it. 
00:34:13.800 [2024-07-26 01:16:43.950942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.950967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.951108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.951134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.951241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.951265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.951430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.951455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.951563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.951587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 
00:34:13.801 [2024-07-26 01:16:43.951698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.951722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.951856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.951881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.952019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.952043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.952214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.952240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.952349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.952373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 
00:34:13.801 [2024-07-26 01:16:43.952513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.952537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.952674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.952699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.952810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.952834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.952964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.952989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.953101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.953130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 
00:34:13.801 [2024-07-26 01:16:43.953263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.953288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.953422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.953448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.953585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.953609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.953750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.953775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.953876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.953900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 
00:34:13.801 [2024-07-26 01:16:43.954037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.954069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.954203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.954227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.954339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.954365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.954501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.954525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.954659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.954683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 
00:34:13.801 [2024-07-26 01:16:43.954792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.954816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.954923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.954948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.955084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.955109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.955245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.955270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.955410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.955435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 
00:34:13.801 [2024-07-26 01:16:43.955564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.955589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.955723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.955748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.955854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.955879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.955984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.956008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.956117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.956144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 
00:34:13.801 [2024-07-26 01:16:43.956281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.956306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.956410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.956435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.956572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.956597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.956715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.956740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.956873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.956898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 
00:34:13.801 [2024-07-26 01:16:43.957008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.957033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.957150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.957175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.957313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.957338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.957452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.957477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.957597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.957622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 
00:34:13.801 [2024-07-26 01:16:43.957754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.957779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.957916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.957941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.958084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.958110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.958250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.958276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.958409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.958433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 
00:34:13.801 [2024-07-26 01:16:43.958596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.958621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.958727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.958752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.958886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.958911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.959052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.959084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 00:34:13.801 [2024-07-26 01:16:43.959217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.959242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 
00:34:13.801 [2024-07-26 01:16:43.959346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.801 [2024-07-26 01:16:43.959371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.801 qpair failed and we were unable to recover it. 
[identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it" message triplet for tqpair=0xd70600, addr=10.0.0.2, port=4420 repeats continuously from 01:16:43.959 through 01:16:43.976; repeats omitted]
00:34:13.804 [2024-07-26 01:16:43.976826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.976851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.976956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.976981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.977149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.977175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.977289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.977314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.977412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.977435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 
00:34:13.804 [2024-07-26 01:16:43.977574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.977600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.977712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.977736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.977900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.977925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.978029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.978054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.978180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.978206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 
00:34:13.804 [2024-07-26 01:16:43.978353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.978377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.978510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.978534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.978667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.978692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.978823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.978847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.978946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.978973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 
00:34:13.804 [2024-07-26 01:16:43.979107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.979132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.979266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.979291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.979423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.979448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.979585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.979609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.979716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.979741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 
00:34:13.804 [2024-07-26 01:16:43.979875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.979899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.980011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.980040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.980219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.980244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.980371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.980395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.980503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.980528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 
00:34:13.804 [2024-07-26 01:16:43.980637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.980661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.980791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.980815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.980951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.980976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.981134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.981158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.981264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.981289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 
00:34:13.804 [2024-07-26 01:16:43.981402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.981426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.981557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.981581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.981713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.981737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.981881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.981905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.982012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.982037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 
00:34:13.804 [2024-07-26 01:16:43.982186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.982211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.982320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.982345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.982476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.982499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.982634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.982659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.982773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.982799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 
00:34:13.804 [2024-07-26 01:16:43.982937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.982961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.983097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.983123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.983241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.983265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.983401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.983425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.983561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.983585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 
00:34:13.804 [2024-07-26 01:16:43.983699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.983725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.983855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.983880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.984018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.984042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.984185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.984213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.984327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.984351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 
00:34:13.804 [2024-07-26 01:16:43.984461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.984485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.984625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.984650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.984782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.984807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.804 qpair failed and we were unable to recover it. 00:34:13.804 [2024-07-26 01:16:43.984971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.804 [2024-07-26 01:16:43.984996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.805 qpair failed and we were unable to recover it. 00:34:13.805 [2024-07-26 01:16:43.985143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.805 [2024-07-26 01:16:43.985169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.805 qpair failed and we were unable to recover it. 
00:34:13.805 [2024-07-26 01:16:43.985282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.805 [2024-07-26 01:16:43.985307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.805 qpair failed and we were unable to recover it. 00:34:13.805 [2024-07-26 01:16:43.985412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.805 [2024-07-26 01:16:43.985437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.805 qpair failed and we were unable to recover it. 00:34:13.805 [2024-07-26 01:16:43.985600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.805 [2024-07-26 01:16:43.985624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.805 qpair failed and we were unable to recover it. 00:34:13.805 [2024-07-26 01:16:43.985760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.805 [2024-07-26 01:16:43.985784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.805 qpair failed and we were unable to recover it. 00:34:13.805 [2024-07-26 01:16:43.985916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.805 [2024-07-26 01:16:43.985941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.805 qpair failed and we were unable to recover it. 
00:34:13.805 [2024-07-26 01:16:43.986067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.805 [2024-07-26 01:16:43.986093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.805 qpair failed and we were unable to recover it. 00:34:13.805 [2024-07-26 01:16:43.986199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.805 [2024-07-26 01:16:43.986232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.805 qpair failed and we were unable to recover it. 00:34:13.805 [2024-07-26 01:16:43.986359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.805 [2024-07-26 01:16:43.986398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.805 qpair failed and we were unable to recover it. 00:34:13.805 [2024-07-26 01:16:43.986541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.805 [2024-07-26 01:16:43.986569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.805 qpair failed and we were unable to recover it. 00:34:13.805 [2024-07-26 01:16:43.986683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.805 [2024-07-26 01:16:43.986711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.805 qpair failed and we were unable to recover it. 
00:34:13.805 [2024-07-26 01:16:43.986850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.805 [2024-07-26 01:16:43.986876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.805 qpair failed and we were unable to recover it. 00:34:13.805 [2024-07-26 01:16:43.987016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.805 [2024-07-26 01:16:43.987042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.805 qpair failed and we were unable to recover it. 00:34:13.805 [2024-07-26 01:16:43.987162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.805 [2024-07-26 01:16:43.987188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.805 qpair failed and we were unable to recover it. 00:34:13.805 [2024-07-26 01:16:43.987317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.805 [2024-07-26 01:16:43.987344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.805 qpair failed and we were unable to recover it. 00:34:13.805 [2024-07-26 01:16:43.987492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.805 [2024-07-26 01:16:43.987518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.805 qpair failed and we were unable to recover it. 
00:34:13.805 [2024-07-26 01:16:43.987660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.805 [2024-07-26 01:16:43.987686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.805 qpair failed and we were unable to recover it. 00:34:13.805 [2024-07-26 01:16:43.987822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.805 [2024-07-26 01:16:43.987849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.805 qpair failed and we were unable to recover it. 00:34:13.805 [2024-07-26 01:16:43.987964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.805 [2024-07-26 01:16:43.987992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.805 qpair failed and we were unable to recover it. 00:34:13.805 [2024-07-26 01:16:43.988129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.805 [2024-07-26 01:16:43.988155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.805 qpair failed and we were unable to recover it. 00:34:13.805 [2024-07-26 01:16:43.988274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.805 [2024-07-26 01:16:43.988300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.805 qpair failed and we were unable to recover it. 
00:34:13.805 [2024-07-26 01:16:43.988418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.805 [2024-07-26 01:16:43.988447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.805 qpair failed and we were unable to recover it.
00:34:13.805 [2024-07-26 01:16:43.988580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.805 [2024-07-26 01:16:43.988605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.805 qpair failed and we were unable to recover it.
00:34:13.805 [2024-07-26 01:16:43.988762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.805 [2024-07-26 01:16:43.988786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.805 qpair failed and we were unable to recover it.
00:34:13.805 [2024-07-26 01:16:43.988902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.805 [2024-07-26 01:16:43.988926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.805 qpair failed and we were unable to recover it.
00:34:13.805 [2024-07-26 01:16:43.989070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.805 [2024-07-26 01:16:43.989096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.805 qpair failed and we were unable to recover it.
00:34:13.805 [2024-07-26 01:16:43.989204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.805 [2024-07-26 01:16:43.989228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.805 qpair failed and we were unable to recover it.
00:34:13.805 [2024-07-26 01:16:43.989340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.805 [2024-07-26 01:16:43.989365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.805 qpair failed and we were unable to recover it.
00:34:13.805 [2024-07-26 01:16:43.989477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.805 [2024-07-26 01:16:43.989501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.805 qpair failed and we were unable to recover it.
00:34:13.805 [2024-07-26 01:16:43.989631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.805 [2024-07-26 01:16:43.989656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.805 qpair failed and we were unable to recover it.
00:34:13.805 [2024-07-26 01:16:43.989762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.805 [2024-07-26 01:16:43.989786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.805 qpair failed and we were unable to recover it.
00:34:13.805 [2024-07-26 01:16:43.989896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.805 [2024-07-26 01:16:43.989921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.805 qpair failed and we were unable to recover it.
00:34:13.805 [2024-07-26 01:16:43.990065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.805 [2024-07-26 01:16:43.990090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.805 qpair failed and we were unable to recover it.
00:34:13.805 [2024-07-26 01:16:43.990226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.805 [2024-07-26 01:16:43.990251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.805 qpair failed and we were unable to recover it.
00:34:13.805 [2024-07-26 01:16:43.990410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.805 [2024-07-26 01:16:43.990435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.805 qpair failed and we were unable to recover it.
00:34:13.805 [2024-07-26 01:16:43.990547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.805 [2024-07-26 01:16:43.990576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.805 qpair failed and we were unable to recover it.
00:34:13.805 [2024-07-26 01:16:43.990720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.805 [2024-07-26 01:16:43.990746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.805 qpair failed and we were unable to recover it.
00:34:13.805 [2024-07-26 01:16:43.990884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.805 [2024-07-26 01:16:43.990910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.805 qpair failed and we were unable to recover it.
00:34:13.805 [2024-07-26 01:16:43.991041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.805 [2024-07-26 01:16:43.991073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.805 qpair failed and we were unable to recover it.
00:34:13.805 [2024-07-26 01:16:43.991184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.805 [2024-07-26 01:16:43.991213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.805 qpair failed and we were unable to recover it.
00:34:13.805 [2024-07-26 01:16:43.991329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.805 [2024-07-26 01:16:43.991355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.805 qpair failed and we were unable to recover it.
00:34:13.805 [2024-07-26 01:16:43.991521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.805 [2024-07-26 01:16:43.991546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.805 qpair failed and we were unable to recover it.
00:34:13.805 [2024-07-26 01:16:43.991684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.805 [2024-07-26 01:16:43.991710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.805 qpair failed and we were unable to recover it.
00:34:13.805 [2024-07-26 01:16:43.991845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.805 [2024-07-26 01:16:43.991872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.805 qpair failed and we were unable to recover it.
00:34:13.805 [2024-07-26 01:16:43.992000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.805 [2024-07-26 01:16:43.992026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.805 qpair failed and we were unable to recover it.
00:34:13.805 [2024-07-26 01:16:43.992144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.805 [2024-07-26 01:16:43.992169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.805 qpair failed and we were unable to recover it.
00:34:13.805 [2024-07-26 01:16:43.992302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.805 [2024-07-26 01:16:43.992327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.805 qpair failed and we were unable to recover it.
00:34:13.805 [2024-07-26 01:16:43.992462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.805 [2024-07-26 01:16:43.992487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.805 qpair failed and we were unable to recover it.
00:34:13.805 [2024-07-26 01:16:43.992625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.805 [2024-07-26 01:16:43.992653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.805 qpair failed and we were unable to recover it.
00:34:13.805 [2024-07-26 01:16:43.992783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.805 [2024-07-26 01:16:43.992808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.805 qpair failed and we were unable to recover it.
00:34:13.805 [2024-07-26 01:16:43.992937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.805 [2024-07-26 01:16:43.992961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.805 qpair failed and we were unable to recover it.
00:34:13.805 [2024-07-26 01:16:43.993101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.805 [2024-07-26 01:16:43.993127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.805 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.993261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.993286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.993415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.993439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.993597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.993621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.993733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.993757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.993866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.993890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.994020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.994046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.994180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.994204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.994308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.994333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.994494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.994519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.994623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.994647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.994761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.994790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.994936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.994963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.995106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.995133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.995273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.995299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.995430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.995456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.995588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.995615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.995726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.995752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.995856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.995881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.996017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.996042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.996170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.996195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.996332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.996356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.996498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.996523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.996634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.996662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.996806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.996832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.996976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.997002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.997113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.997141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.997260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.997288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.997430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.997457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.997590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.997616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.997778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.997804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.997932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.997958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.998120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.998146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.998253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.998278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.998388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.998411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.998566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.998591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.998744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.998769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.998910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.998934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.999052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.999086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.999230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.999257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.999364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.999390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.999520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.999546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.999705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.999731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:43.999865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:43.999891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:44.000022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:44.000048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:44.000190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:44.000216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.806 [2024-07-26 01:16:44.000326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.806 [2024-07-26 01:16:44.000352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.806 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.000489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.000515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.000658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.000685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.000845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.000871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.001043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.001092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.001236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.001268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.001379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.001404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.001508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.001534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.001639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.001663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.001763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.001789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.001948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.001973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.002108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.002134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.002296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.002321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.002453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.002479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.002585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.002609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.002722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.002747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.002889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.002914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.003049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.003080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.003214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.003240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.003354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.003379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.003515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.003541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.003678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.003703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.003805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.003830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.003928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.003953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.004113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.004140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.004248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.004273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.004401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.004426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.004557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.004582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.004692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.004719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.004881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.004907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.005013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.005038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.005164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.005190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.005321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.005346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.005461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.005487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.005593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.005618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.005746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.005771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.005933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.005959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.006057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.006090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.006190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.807 [2024-07-26 01:16:44.006215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.807 qpair failed and we were unable to recover it.
00:34:13.807 [2024-07-26 01:16:44.006352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.807 [2024-07-26 01:16:44.006377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.807 qpair failed and we were unable to recover it. 00:34:13.807 [2024-07-26 01:16:44.006482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.807 [2024-07-26 01:16:44.006507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.807 qpair failed and we were unable to recover it. 00:34:13.807 [2024-07-26 01:16:44.006613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.807 [2024-07-26 01:16:44.006638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.807 qpair failed and we were unable to recover it. 00:34:13.807 [2024-07-26 01:16:44.006746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.807 [2024-07-26 01:16:44.006771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.807 qpair failed and we were unable to recover it. 00:34:13.807 [2024-07-26 01:16:44.006928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.807 [2024-07-26 01:16:44.006953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.807 qpair failed and we were unable to recover it. 
00:34:13.807 [2024-07-26 01:16:44.007085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.807 [2024-07-26 01:16:44.007112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.807 qpair failed and we were unable to recover it. 00:34:13.807 [2024-07-26 01:16:44.007218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.807 [2024-07-26 01:16:44.007244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.807 qpair failed and we were unable to recover it. 00:34:13.807 [2024-07-26 01:16:44.007357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.807 [2024-07-26 01:16:44.007389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.807 qpair failed and we were unable to recover it. 00:34:13.807 [2024-07-26 01:16:44.007490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.807 [2024-07-26 01:16:44.007516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.807 qpair failed and we were unable to recover it. 00:34:13.807 [2024-07-26 01:16:44.007676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.807 [2024-07-26 01:16:44.007701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.807 qpair failed and we were unable to recover it. 
00:34:13.807 [2024-07-26 01:16:44.007807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.807 [2024-07-26 01:16:44.007832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.807 qpair failed and we were unable to recover it. 00:34:13.807 [2024-07-26 01:16:44.007928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.807 [2024-07-26 01:16:44.007954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.807 qpair failed and we were unable to recover it. 00:34:13.807 [2024-07-26 01:16:44.008119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.807 [2024-07-26 01:16:44.008145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.807 qpair failed and we were unable to recover it. 00:34:13.807 [2024-07-26 01:16:44.008245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.807 [2024-07-26 01:16:44.008270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.807 qpair failed and we were unable to recover it. 00:34:13.807 [2024-07-26 01:16:44.008400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.807 [2024-07-26 01:16:44.008426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.807 qpair failed and we were unable to recover it. 
00:34:13.807 [2024-07-26 01:16:44.008585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.807 [2024-07-26 01:16:44.008611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.807 qpair failed and we were unable to recover it. 00:34:13.807 [2024-07-26 01:16:44.008764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.807 [2024-07-26 01:16:44.008789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.807 qpair failed and we were unable to recover it. 00:34:13.807 [2024-07-26 01:16:44.008917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.807 [2024-07-26 01:16:44.008942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.807 qpair failed and we were unable to recover it. 00:34:13.807 [2024-07-26 01:16:44.009050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.807 [2024-07-26 01:16:44.009083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.807 qpair failed and we were unable to recover it. 00:34:13.807 [2024-07-26 01:16:44.009242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.807 [2024-07-26 01:16:44.009267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.807 qpair failed and we were unable to recover it. 
00:34:13.807 [2024-07-26 01:16:44.009393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.807 [2024-07-26 01:16:44.009418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.807 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.009538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.009564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.009707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.009732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.009842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.009867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.009963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.009988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 
00:34:13.808 [2024-07-26 01:16:44.010126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.010152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.010288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.010312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.010451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.010476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.010608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.010633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.010736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.010761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 
00:34:13.808 [2024-07-26 01:16:44.010867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.010893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.011056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.011088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.011185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.011210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.011310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.011335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.011451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.011496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 
00:34:13.808 [2024-07-26 01:16:44.011662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.011689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.011823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.011850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.011956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.011982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.012126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.012166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.012318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.012346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 
00:34:13.808 [2024-07-26 01:16:44.012483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.012511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.012674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.012701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.012822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.012849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.013011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.013039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.013148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.013175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 
00:34:13.808 [2024-07-26 01:16:44.013287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.013313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.013423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.013450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.013569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.013595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.013738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.013765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.013903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.013931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 
00:34:13.808 [2024-07-26 01:16:44.014046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.014081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.014203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.014229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.014368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.014393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.014554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.014579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.014717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.014743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 
00:34:13.808 [2024-07-26 01:16:44.014848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.014873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.015007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.015032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.015168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.015193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.015303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.015328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.015433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.015458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 
00:34:13.808 [2024-07-26 01:16:44.015569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.015594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.015709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.015739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.015898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.015923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.016035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.016068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.016179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.016204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 
00:34:13.808 [2024-07-26 01:16:44.016308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.016334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.016435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.016460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.016593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.016618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.016754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.016779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.016909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.016934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 
00:34:13.808 [2024-07-26 01:16:44.017070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.017096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.017199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.017224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.017327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.017353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.017459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.017484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.017585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.017610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 
00:34:13.808 [2024-07-26 01:16:44.017763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.017790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.808 [2024-07-26 01:16:44.017897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.808 [2024-07-26 01:16:44.017923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.808 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.018041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.018087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.018237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.018266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.018426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.018452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 
00:34:13.809 [2024-07-26 01:16:44.018587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.018614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.018750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.018777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.018937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.018962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.019072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.019100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.019211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.019237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 
00:34:13.809 [2024-07-26 01:16:44.019344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.019369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.019503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.019528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.019692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.019717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.019818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.019848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.020016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.020043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 
00:34:13.809 [2024-07-26 01:16:44.020191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.020218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.020322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.020346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.020480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.020506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.020618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.020644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.020773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.020799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 
00:34:13.809 [2024-07-26 01:16:44.020939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.020965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.021084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.021110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.021243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.021268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.021371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.021396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.021507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.021533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 
00:34:13.809 [2024-07-26 01:16:44.021644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.021669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.021830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.021855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.021970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.021994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.022121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.022147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.022274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.022300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 
00:34:13.809 [2024-07-26 01:16:44.022410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.022434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.022599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.022625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.022736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.022761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.022901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.022926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.023031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.023056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 
00:34:13.809 [2024-07-26 01:16:44.023201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.023226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.023389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.023414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.023541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.023566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.023679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.023704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.023811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.023836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 
00:34:13.809 [2024-07-26 01:16:44.023944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.023969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.024079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.024105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.024240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.024265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.024407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.024433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.024533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.024558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 
00:34:13.809 [2024-07-26 01:16:44.024724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.024749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.024884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.024909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.025018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.025044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.025162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.025187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.025324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.025349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 
00:34:13.809 [2024-07-26 01:16:44.025453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.025478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.025606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.025631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.025763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.809 [2024-07-26 01:16:44.025789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.809 qpair failed and we were unable to recover it. 00:34:13.809 [2024-07-26 01:16:44.025923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.025948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.026102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.026141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 
00:34:13.810 [2024-07-26 01:16:44.026282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.026309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.026450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.026477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.026615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.026642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.026786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.026813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.026947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.026973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 
00:34:13.810 [2024-07-26 01:16:44.027105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.027132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.027240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.027271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.027415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.027442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.027575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.027601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.027723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.027750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 
00:34:13.810 [2024-07-26 01:16:44.027885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.027912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.028046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.028078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.028214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.028246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.028411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.028438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.028550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.028576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 
00:34:13.810 [2024-07-26 01:16:44.028711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.028736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.028902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.028928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.029076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.029102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.029215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.029242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.029386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.029412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 
00:34:13.810 [2024-07-26 01:16:44.029515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.029542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.029649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.029676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.029839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.029866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.029994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.030020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.030163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.030190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 
00:34:13.810 [2024-07-26 01:16:44.030327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.030354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.030474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.030499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.030637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.030663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.030778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.030805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.030966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.030991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 
00:34:13.810 [2024-07-26 01:16:44.031113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.031139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.031276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.031303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.031403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.031429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.031558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.031584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.031746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.031773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 
00:34:13.810 [2024-07-26 01:16:44.031910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.031938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.032077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.032104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.032215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.032240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.032353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.032379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 00:34:13.810 [2024-07-26 01:16:44.032486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.810 [2024-07-26 01:16:44.032516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.810 qpair failed and we were unable to recover it. 
00:34:13.810 [2024-07-26 01:16:44.032652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.810 [2024-07-26 01:16:44.032677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.810 qpair failed and we were unable to recover it.
00:34:13.810 [2024-07-26 01:16:44.032780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.810 [2024-07-26 01:16:44.032805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.810 qpair failed and we were unable to recover it.
00:34:13.810 [2024-07-26 01:16:44.032906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.810 [2024-07-26 01:16:44.032931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.810 qpair failed and we were unable to recover it.
00:34:13.810 [2024-07-26 01:16:44.033036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.810 [2024-07-26 01:16:44.033072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.810 qpair failed and we were unable to recover it.
00:34:13.810 [2024-07-26 01:16:44.033216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.810 [2024-07-26 01:16:44.033241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.810 qpair failed and we were unable to recover it.
00:34:13.810 [2024-07-26 01:16:44.033349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.810 [2024-07-26 01:16:44.033375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.810 qpair failed and we were unable to recover it.
00:34:13.810 [2024-07-26 01:16:44.033534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.810 [2024-07-26 01:16:44.033560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.810 qpair failed and we were unable to recover it.
00:34:13.810 [2024-07-26 01:16:44.033699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.810 [2024-07-26 01:16:44.033728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.810 qpair failed and we were unable to recover it.
00:34:13.810 [2024-07-26 01:16:44.033865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.810 [2024-07-26 01:16:44.033892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.810 qpair failed and we were unable to recover it.
00:34:13.810 [2024-07-26 01:16:44.034033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.810 [2024-07-26 01:16:44.034065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.810 qpair failed and we were unable to recover it.
00:34:13.810 [2024-07-26 01:16:44.034205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.810 [2024-07-26 01:16:44.034231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.810 qpair failed and we were unable to recover it.
00:34:13.810 [2024-07-26 01:16:44.034369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.810 [2024-07-26 01:16:44.034395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.810 qpair failed and we were unable to recover it.
00:34:13.810 [2024-07-26 01:16:44.034506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.810 [2024-07-26 01:16:44.034533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.810 qpair failed and we were unable to recover it.
00:34:13.810 [2024-07-26 01:16:44.034671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.810 [2024-07-26 01:16:44.034697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.810 qpair failed and we were unable to recover it.
00:34:13.810 [2024-07-26 01:16:44.034830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.810 [2024-07-26 01:16:44.034856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.810 qpair failed and we were unable to recover it.
00:34:13.810 [2024-07-26 01:16:44.034997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.810 [2024-07-26 01:16:44.035024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.810 qpair failed and we were unable to recover it.
00:34:13.810 [2024-07-26 01:16:44.035137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.810 [2024-07-26 01:16:44.035164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.035291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.035316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.035449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.035475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.035583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.035609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.035748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.035774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.035887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.035914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.036019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.036045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.036205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.036231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.036367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.036394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.036552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.036577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.036682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.036707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.036844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.036869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.036970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.036995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.037148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.037174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.037321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.037347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.037453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.037478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.037576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.037602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.037714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.037740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.037875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.037900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.038024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.038049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.038176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.038202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.038339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.038364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.038469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.038494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.038607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.038632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.038797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.038826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.038929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.038956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.039102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.039128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.039266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.039292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.039451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.039477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.039640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.039667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.039804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.039830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.039991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.040017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.040167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.040194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.040295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.040321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.040430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.040457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.040572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.040598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.040733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.040760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.040913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.040951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.041085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.041119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.041287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.041313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.041443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.041469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.041574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.041600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.041733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.041759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.041918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.041943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.042104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.042130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.042260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.042286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.042400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.042425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.042564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.042589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.042697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.042723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.042852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.042877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.043006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.043032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.043179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.043205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.043317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.043342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.043454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.043479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.043609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.043634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.043767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.043792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.043903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.043929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.044031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.044064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.811 [2024-07-26 01:16:44.044182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.811 [2024-07-26 01:16:44.044208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.811 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.044318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.044343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.044483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.044508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.044629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.044654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.044787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.044812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.044935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.044961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.045104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.045136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.045885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.045915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.046055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.046088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.046227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.046252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.046367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.046394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.046531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.046556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.046665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.046689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.046823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.046849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.046966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.046992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.047110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.047137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.047243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.047269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.047382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.047408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.047509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.047535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.047639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.047666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.047833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.047858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.047967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.047993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.048114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.048140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.048251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.048276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.048410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.048436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.048567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.048593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.048695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.048720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.048855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.048880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.049017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.049042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.049163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.049189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.049292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.049318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.049428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.049453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.049585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.049610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.049730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.049760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.049866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.049891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.050021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.050046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.050172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.050198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.050307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.050333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.050438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.050463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.050593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.050618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.050744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.812 [2024-07-26 01:16:44.050770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.812 qpair failed and we were unable to recover it.
00:34:13.812 [2024-07-26 01:16:44.050906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.812 [2024-07-26 01:16:44.050932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.812 qpair failed and we were unable to recover it. 00:34:13.812 [2024-07-26 01:16:44.051072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.812 [2024-07-26 01:16:44.051098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.812 qpair failed and we were unable to recover it. 00:34:13.812 [2024-07-26 01:16:44.051206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.812 [2024-07-26 01:16:44.051232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.812 qpair failed and we were unable to recover it. 00:34:13.812 [2024-07-26 01:16:44.051388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.812 [2024-07-26 01:16:44.051413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.812 qpair failed and we were unable to recover it. 00:34:13.812 [2024-07-26 01:16:44.051520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.812 [2024-07-26 01:16:44.051546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.812 qpair failed and we were unable to recover it. 
00:34:13.812 [2024-07-26 01:16:44.051649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.812 [2024-07-26 01:16:44.051674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.812 qpair failed and we were unable to recover it. 00:34:13.812 [2024-07-26 01:16:44.051816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.812 [2024-07-26 01:16:44.051842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.812 qpair failed and we were unable to recover it. 00:34:13.812 [2024-07-26 01:16:44.052004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.052029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.052143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.052169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.052273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.052298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 
00:34:13.813 [2024-07-26 01:16:44.052402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.052427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.052585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.052609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.052738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.052763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.052869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.052894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.053007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.053032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 
00:34:13.813 [2024-07-26 01:16:44.053179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.053205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.053312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.053338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.053440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.053465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.053607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.053633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.053741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.053766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 
00:34:13.813 [2024-07-26 01:16:44.053899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.053924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.054036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.054078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.054197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.054223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.054325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.054350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.054448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.054473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 
00:34:13.813 [2024-07-26 01:16:44.054591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.054617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.054723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.054749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.054882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.054907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.055020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.055045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.055196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.055221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 
00:34:13.813 [2024-07-26 01:16:44.055351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.055376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.055501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.055526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.055691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.055716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.055855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.055883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.056023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.056048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 
00:34:13.813 [2024-07-26 01:16:44.056194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.056219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.056326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.056353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.056496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.056521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.056683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.056708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.056839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.056864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 
00:34:13.813 [2024-07-26 01:16:44.056998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.057023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.057136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.057161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.057291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.057316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.057457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.057482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.057583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.057607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 
00:34:13.813 [2024-07-26 01:16:44.057762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.057787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.057893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.057919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.058049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.058082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.058187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.058213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.058325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.058350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 
00:34:13.813 [2024-07-26 01:16:44.058480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.058505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.058641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.058666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.058804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.058829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.058959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.058984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.059117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.059143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 
00:34:13.813 [2024-07-26 01:16:44.059248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.059273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.059377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.059402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.059504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.059529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.059631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.059656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.059788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.059813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 
00:34:13.813 [2024-07-26 01:16:44.059914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.059939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.060074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.060100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.060236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.060261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.060358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.060383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.060489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.060514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 
00:34:13.813 [2024-07-26 01:16:44.060641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.060667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.060799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.813 [2024-07-26 01:16:44.060824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.813 qpair failed and we were unable to recover it. 00:34:13.813 [2024-07-26 01:16:44.060939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.814 [2024-07-26 01:16:44.060966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.814 qpair failed and we were unable to recover it. 00:34:13.814 [2024-07-26 01:16:44.061107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.814 [2024-07-26 01:16:44.061133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.814 qpair failed and we were unable to recover it. 00:34:13.814 [2024-07-26 01:16:44.061240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.814 [2024-07-26 01:16:44.061265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.814 qpair failed and we were unable to recover it. 
00:34:13.814 [2024-07-26 01:16:44.061376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.814 [2024-07-26 01:16:44.061402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.814 qpair failed and we were unable to recover it. 00:34:13.814 [2024-07-26 01:16:44.061538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.814 [2024-07-26 01:16:44.061563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.814 qpair failed and we were unable to recover it. 00:34:13.814 [2024-07-26 01:16:44.061689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.814 [2024-07-26 01:16:44.061713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.814 qpair failed and we were unable to recover it. 00:34:13.814 [2024-07-26 01:16:44.061843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.814 [2024-07-26 01:16:44.061868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.814 qpair failed and we were unable to recover it. 00:34:13.814 [2024-07-26 01:16:44.061965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.814 [2024-07-26 01:16:44.061991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.814 qpair failed and we were unable to recover it. 
00:34:13.814 [2024-07-26 01:16:44.062114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.062139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.062274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.062300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.062438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.062463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.062568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.062593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.062710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.062735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.062838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.062863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.063021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.063046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.063174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.063199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.063334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.063359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.063517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.063542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.063654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.063680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.063796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.063821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.063957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.063982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.064126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.064153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.064283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.064308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.064426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.064451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.064615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.064640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.064768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.064793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.064927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.064952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.065104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.065130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.065267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.065296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.065419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.065445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.065608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.065633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.065744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.065769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.065874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.065899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.066067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.066092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.066206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.066236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.066336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.066361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.066491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.066516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.066653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.066678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.066813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.066837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.066956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.066981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.067089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.067115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.067274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.067299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.067440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.067465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.067623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.067648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.067765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.067791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.067921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.067946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.068090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.068116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.068252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.068277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.068419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.068445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.068539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.068564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.068676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.068701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.068850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.068876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.069007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.069032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.069175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.069201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.069310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.069335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.069467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.069493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.069624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.069649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.814 [2024-07-26 01:16:44.069771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.814 [2024-07-26 01:16:44.069797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.814 qpair failed and we were unable to recover it.
00:34:13.815 [2024-07-26 01:16:44.069917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.815 [2024-07-26 01:16:44.069942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.815 qpair failed and we were unable to recover it.
00:34:13.815 [2024-07-26 01:16:44.070095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.815 [2024-07-26 01:16:44.070121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.815 qpair failed and we were unable to recover it.
00:34:13.815 [2024-07-26 01:16:44.070236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.815 [2024-07-26 01:16:44.070263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.815 qpair failed and we were unable to recover it.
00:34:13.815 [2024-07-26 01:16:44.070359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.815 [2024-07-26 01:16:44.070384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.815 qpair failed and we were unable to recover it.
00:34:13.815 [2024-07-26 01:16:44.070493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.815 [2024-07-26 01:16:44.070518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.815 qpair failed and we were unable to recover it.
00:34:13.815 [2024-07-26 01:16:44.070667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.815 [2024-07-26 01:16:44.070692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.815 qpair failed and we were unable to recover it.
00:34:13.815 [2024-07-26 01:16:44.070791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.815 [2024-07-26 01:16:44.070816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.815 qpair failed and we were unable to recover it.
00:34:13.815 [2024-07-26 01:16:44.070928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.815 [2024-07-26 01:16:44.070953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.815 qpair failed and we were unable to recover it.
00:34:13.815 [2024-07-26 01:16:44.071048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.815 [2024-07-26 01:16:44.071082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.815 qpair failed and we were unable to recover it.
00:34:13.815 [2024-07-26 01:16:44.071183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.815 [2024-07-26 01:16:44.071208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.815 qpair failed and we were unable to recover it.
00:34:13.815 [2024-07-26 01:16:44.071368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.815 [2024-07-26 01:16:44.071393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.815 qpair failed and we were unable to recover it.
00:34:13.815 [2024-07-26 01:16:44.071554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.815 [2024-07-26 01:16:44.071579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.815 qpair failed and we were unable to recover it.
00:34:13.815 [2024-07-26 01:16:44.071715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.815 [2024-07-26 01:16:44.071740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.815 qpair failed and we were unable to recover it.
00:34:13.815 [2024-07-26 01:16:44.071882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.815 [2024-07-26 01:16:44.071907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.815 qpair failed and we were unable to recover it.
00:34:13.815 [2024-07-26 01:16:44.072027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.815 [2024-07-26 01:16:44.072052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.815 qpair failed and we were unable to recover it.
00:34:13.815 [2024-07-26 01:16:44.072208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.815 [2024-07-26 01:16:44.072233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.815 qpair failed and we were unable to recover it.
00:34:13.815 [2024-07-26 01:16:44.072343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.815 [2024-07-26 01:16:44.072369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.815 qpair failed and we were unable to recover it.
00:34:13.815 [2024-07-26 01:16:44.072516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.815 [2024-07-26 01:16:44.072546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.815 qpair failed and we were unable to recover it.
00:34:13.815 [2024-07-26 01:16:44.072697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.815 [2024-07-26 01:16:44.072751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.815 qpair failed and we were unable to recover it.
00:34:13.815 [2024-07-26 01:16:44.072880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.815 [2024-07-26 01:16:44.072911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.815 qpair failed and we were unable to recover it.
00:34:13.815 [2024-07-26 01:16:44.073072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.815 [2024-07-26 01:16:44.073099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.815 qpair failed and we were unable to recover it.
00:34:13.815 [2024-07-26 01:16:44.073213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.815 [2024-07-26 01:16:44.073239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.815 qpair failed and we were unable to recover it.
00:34:13.815 [2024-07-26 01:16:44.073344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.815 [2024-07-26 01:16:44.073371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.815 qpair failed and we were unable to recover it.
00:34:13.815 [2024-07-26 01:16:44.073482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.815 [2024-07-26 01:16:44.073508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.073650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.073676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.073833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.073858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.074000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.074026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.074179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.074205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.074318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.074343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.074454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.074479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.074587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.074613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.074727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.074752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.074902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.074927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.075069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.075095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.075233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.075258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.075398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.075424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.075590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.075616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.075716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.075741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.075878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.075903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.076021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.076046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.076191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.076216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.076375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.076401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.076543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.076568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.076700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.076724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.076866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.076895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.076996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.077021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.077176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.077201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.077307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.077332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.077494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.077519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.077657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.077682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.077793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.077818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.077936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.077961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.078077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.078112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.078275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.078301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.078446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.078471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.078584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.078609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.078780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.078806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.078937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.078962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.079079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.079116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.079258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.079283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.079418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.079443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.079543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.079568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.079680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.079706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.079834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.079858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.079966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.816 [2024-07-26 01:16:44.079991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.816 qpair failed and we were unable to recover it.
00:34:13.816 [2024-07-26 01:16:44.080138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.816 [2024-07-26 01:16:44.080164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.816 qpair failed and we were unable to recover it. 00:34:13.816 [2024-07-26 01:16:44.080296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.816 [2024-07-26 01:16:44.080321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.816 qpair failed and we were unable to recover it. 00:34:13.816 [2024-07-26 01:16:44.080466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.816 [2024-07-26 01:16:44.080491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.816 qpair failed and we were unable to recover it. 00:34:13.816 [2024-07-26 01:16:44.080630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.816 [2024-07-26 01:16:44.080655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.816 qpair failed and we were unable to recover it. 00:34:13.816 [2024-07-26 01:16:44.080791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.816 [2024-07-26 01:16:44.080816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.816 qpair failed and we were unable to recover it. 
00:34:13.816 [2024-07-26 01:16:44.080919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.816 [2024-07-26 01:16:44.080944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.816 qpair failed and we were unable to recover it. 00:34:13.816 [2024-07-26 01:16:44.081081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.816 [2024-07-26 01:16:44.081119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.816 qpair failed and we were unable to recover it. 00:34:13.816 [2024-07-26 01:16:44.081217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.816 [2024-07-26 01:16:44.081242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.816 qpair failed and we were unable to recover it. 00:34:13.816 [2024-07-26 01:16:44.081412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.816 [2024-07-26 01:16:44.081437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.816 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.081580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.081605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 
00:34:13.817 [2024-07-26 01:16:44.081719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.081744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.081872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.081898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.082001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.082027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.082155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.082180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.082281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.082306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 
00:34:13.817 [2024-07-26 01:16:44.082424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.082450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.082567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.082594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.082723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.082748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.082849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.082874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.082982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.083008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 
00:34:13.817 [2024-07-26 01:16:44.083166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.083205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.083333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.083361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.083526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.083552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.083686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.083712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.083851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.083877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 
00:34:13.817 [2024-07-26 01:16:44.083984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.084010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.084150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.084178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.084322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.084349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.084456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.084482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.084615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.084641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 
00:34:13.817 [2024-07-26 01:16:44.084760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.084786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.084890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.084915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.085021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.085046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.085177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.085203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.085316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.085341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 
00:34:13.817 [2024-07-26 01:16:44.085444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.085469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.085616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.085641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.085771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.085799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.085913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.085939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.086077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.086103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 
00:34:13.817 [2024-07-26 01:16:44.086237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.086262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.086363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.086387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.086553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.086578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.086679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.086704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.086820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.086849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 
00:34:13.817 [2024-07-26 01:16:44.086972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.086998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.087142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.087170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.087276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.087303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.087464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.087489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.087594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.087618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 
00:34:13.817 [2024-07-26 01:16:44.087778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.087803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.087969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.087994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.088127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.088153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.088288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.088315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.088479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.088506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 
00:34:13.817 [2024-07-26 01:16:44.088611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.088638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.088769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.088795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.088941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.088967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.089078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.089105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.089219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.089244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 
00:34:13.817 [2024-07-26 01:16:44.089353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.089380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.089527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.089553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.089712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.089738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.089853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.089880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.090018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.090044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 
00:34:13.817 [2024-07-26 01:16:44.090195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.090222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.090333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.090358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.817 [2024-07-26 01:16:44.090502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.817 [2024-07-26 01:16:44.090527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.817 qpair failed and we were unable to recover it. 00:34:13.818 [2024-07-26 01:16:44.090665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.818 [2024-07-26 01:16:44.090691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.818 qpair failed and we were unable to recover it. 00:34:13.818 [2024-07-26 01:16:44.090852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.818 [2024-07-26 01:16:44.090877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.818 qpair failed and we were unable to recover it. 
00:34:13.818 [2024-07-26 01:16:44.090980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.818 [2024-07-26 01:16:44.091004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.818 qpair failed and we were unable to recover it. 00:34:13.818 [2024-07-26 01:16:44.091111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.818 [2024-07-26 01:16:44.091137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.818 qpair failed and we were unable to recover it. 00:34:13.818 [2024-07-26 01:16:44.091274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.818 [2024-07-26 01:16:44.091299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.818 qpair failed and we were unable to recover it. 00:34:13.818 [2024-07-26 01:16:44.091439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.818 [2024-07-26 01:16:44.091464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.818 qpair failed and we were unable to recover it. 00:34:13.818 [2024-07-26 01:16:44.091606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.818 [2024-07-26 01:16:44.091635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.818 qpair failed and we were unable to recover it. 
00:34:13.818 [2024-07-26 01:16:44.091753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.818 [2024-07-26 01:16:44.091780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.818 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it) repeats 113 more times between 01:16:44.091894 and 01:16:44.109856, alternating between tqpair=0x7fba60000b90 and tqpair=0xd70600, all against addr=10.0.0.2, port=4420 ...]
00:34:13.820 [2024-07-26 01:16:44.109977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.820 [2024-07-26 01:16:44.110004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.820 qpair failed and we were unable to recover it.
00:34:13.820 [2024-07-26 01:16:44.110142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.110168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.110278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.110303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.110440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.110465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.110575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.110599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.110731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.110756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 
00:34:13.820 [2024-07-26 01:16:44.110890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.110916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.111055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.111089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.111261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.111286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.111400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.111425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.111534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.111559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 
00:34:13.820 [2024-07-26 01:16:44.111688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.111712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.111825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.111850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.111981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.112006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.112167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.112192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.112302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.112327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 
00:34:13.820 [2024-07-26 01:16:44.112427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.112452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.112578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.112603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.112709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.112734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.112831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.112856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.112975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.113014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 
00:34:13.820 [2024-07-26 01:16:44.113163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.113193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.113333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.113359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.113467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.113494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.113611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.113637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.113766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.113792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 
00:34:13.820 [2024-07-26 01:16:44.113908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.113934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.114099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.114125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.114233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.114258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.114395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.114420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.114535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.114561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 
00:34:13.820 [2024-07-26 01:16:44.114692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.114717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.114859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.114887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.115028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.115055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.115201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.115228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.115354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.115379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 
00:34:13.820 [2024-07-26 01:16:44.115489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.115515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.115628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.115653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.115764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.115791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.115929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.115955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.116120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.116146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 
00:34:13.820 [2024-07-26 01:16:44.116304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.116336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.116474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.116500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.116658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.116683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.116780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.116805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.116919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.116945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 
00:34:13.820 [2024-07-26 01:16:44.117049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.117079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.117193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.117219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.117341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.117367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.117468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.117493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 00:34:13.820 [2024-07-26 01:16:44.117623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.820 [2024-07-26 01:16:44.117648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.820 qpair failed and we were unable to recover it. 
00:34:13.821 [2024-07-26 01:16:44.117766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.821 [2024-07-26 01:16:44.117791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.821 qpair failed and we were unable to recover it. 00:34:13.821 [2024-07-26 01:16:44.117932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.821 [2024-07-26 01:16:44.117957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.821 qpair failed and we were unable to recover it. 00:34:13.821 [2024-07-26 01:16:44.118073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.821 [2024-07-26 01:16:44.118110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.821 qpair failed and we were unable to recover it. 00:34:13.821 [2024-07-26 01:16:44.118245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.821 [2024-07-26 01:16:44.118270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.821 qpair failed and we were unable to recover it. 00:34:13.821 [2024-07-26 01:16:44.118408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.821 [2024-07-26 01:16:44.118433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.821 qpair failed and we were unable to recover it. 
00:34:13.821 [2024-07-26 01:16:44.118539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.821 [2024-07-26 01:16:44.118564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.821 qpair failed and we were unable to recover it. 00:34:13.821 [2024-07-26 01:16:44.118673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.821 [2024-07-26 01:16:44.118698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.821 qpair failed and we were unable to recover it. 00:34:13.821 [2024-07-26 01:16:44.118832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.821 [2024-07-26 01:16:44.118857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.821 qpair failed and we were unable to recover it. 00:34:13.821 [2024-07-26 01:16:44.118993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.821 [2024-07-26 01:16:44.119018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.821 qpair failed and we were unable to recover it. 00:34:13.821 [2024-07-26 01:16:44.119147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.821 [2024-07-26 01:16:44.119174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.821 qpair failed and we were unable to recover it. 
00:34:13.821 [2024-07-26 01:16:44.119309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.821 [2024-07-26 01:16:44.119338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.821 qpair failed and we were unable to recover it. 00:34:13.821 [2024-07-26 01:16:44.119451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.821 [2024-07-26 01:16:44.119476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.821 qpair failed and we were unable to recover it. 00:34:13.821 [2024-07-26 01:16:44.119585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.821 [2024-07-26 01:16:44.119611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.821 qpair failed and we were unable to recover it. 00:34:13.821 [2024-07-26 01:16:44.119747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.821 [2024-07-26 01:16:44.119772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.821 qpair failed and we were unable to recover it. 00:34:13.821 [2024-07-26 01:16:44.119880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.821 [2024-07-26 01:16:44.119906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.821 qpair failed and we were unable to recover it. 
00:34:13.821 [2024-07-26 01:16:44.120033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.821 [2024-07-26 01:16:44.120067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.821 qpair failed and we were unable to recover it. 00:34:13.821 [2024-07-26 01:16:44.120169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.821 [2024-07-26 01:16:44.120194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.821 qpair failed and we were unable to recover it. 00:34:13.821 [2024-07-26 01:16:44.120326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.821 [2024-07-26 01:16:44.120351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.821 qpair failed and we were unable to recover it. 00:34:13.821 [2024-07-26 01:16:44.120486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.821 [2024-07-26 01:16:44.120511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.821 qpair failed and we were unable to recover it. 00:34:13.821 [2024-07-26 01:16:44.120621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.821 [2024-07-26 01:16:44.120646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.821 qpair failed and we were unable to recover it. 
00:34:13.821 [2024-07-26 01:16:44.120756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.821 [2024-07-26 01:16:44.120781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.821 qpair failed and we were unable to recover it. 00:34:13.821 [2024-07-26 01:16:44.120915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.821 [2024-07-26 01:16:44.120940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.821 qpair failed and we were unable to recover it. 00:34:13.821 [2024-07-26 01:16:44.121096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.821 [2024-07-26 01:16:44.121122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.821 qpair failed and we were unable to recover it. 00:34:13.821 [2024-07-26 01:16:44.121227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.821 [2024-07-26 01:16:44.121252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.821 qpair failed and we were unable to recover it. 00:34:13.821 [2024-07-26 01:16:44.121389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.821 [2024-07-26 01:16:44.121415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.821 qpair failed and we were unable to recover it. 
00:34:13.821 [2024-07-26 01:16:44.121526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.821 [2024-07-26 01:16:44.121551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.821 qpair failed and we were unable to recover it. 00:34:13.821 [2024-07-26 01:16:44.121682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.821 [2024-07-26 01:16:44.121707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.821 qpair failed and we were unable to recover it. 00:34:13.821 [2024-07-26 01:16:44.121842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.821 [2024-07-26 01:16:44.121867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.821 qpair failed and we were unable to recover it. 00:34:13.821 [2024-07-26 01:16:44.121976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.821 [2024-07-26 01:16:44.122001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.821 qpair failed and we were unable to recover it. 00:34:13.821 [2024-07-26 01:16:44.122145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.821 [2024-07-26 01:16:44.122185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.821 qpair failed and we were unable to recover it. 
00:34:13.821 [... the same three-message sequence ("connect() failed, errno = 111" / "sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it.") repeats continuously from 01:16:44.122336 through 01:16:44.139666; repeats elided ...]
00:34:13.823 [2024-07-26 01:16:44.139796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.823 [2024-07-26 01:16:44.139823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.823 qpair failed and we were unable to recover it. 00:34:13.823 [2024-07-26 01:16:44.139961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.823 [2024-07-26 01:16:44.139986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.823 qpair failed and we were unable to recover it. 00:34:13.823 [2024-07-26 01:16:44.140147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.823 [2024-07-26 01:16:44.140173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.823 qpair failed and we were unable to recover it. 00:34:13.823 [2024-07-26 01:16:44.140278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.823 [2024-07-26 01:16:44.140304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.823 qpair failed and we were unable to recover it. 00:34:13.823 [2024-07-26 01:16:44.140438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.823 [2024-07-26 01:16:44.140464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.823 qpair failed and we were unable to recover it. 
00:34:13.823 [2024-07-26 01:16:44.140578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.823 [2024-07-26 01:16:44.140603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.823 qpair failed and we were unable to recover it. 00:34:13.823 [2024-07-26 01:16:44.140741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.823 [2024-07-26 01:16:44.140767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.823 qpair failed and we were unable to recover it. 00:34:13.823 [2024-07-26 01:16:44.140939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.823 [2024-07-26 01:16:44.140964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.823 qpair failed and we were unable to recover it. 00:34:13.823 [2024-07-26 01:16:44.141084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.823 [2024-07-26 01:16:44.141122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:13.823 qpair failed and we were unable to recover it. 00:34:13.823 [2024-07-26 01:16:44.141280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.823 [2024-07-26 01:16:44.141318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.823 qpair failed and we were unable to recover it. 
00:34:13.823 [2024-07-26 01:16:44.141471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.823 [2024-07-26 01:16:44.141498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.823 qpair failed and we were unable to recover it. 00:34:13.823 [2024-07-26 01:16:44.141626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.823 [2024-07-26 01:16:44.141652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.823 qpair failed and we were unable to recover it. 00:34:13.823 [2024-07-26 01:16:44.141788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.823 [2024-07-26 01:16:44.141813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.823 qpair failed and we were unable to recover it. 00:34:13.823 [2024-07-26 01:16:44.141972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.823 [2024-07-26 01:16:44.141997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.823 qpair failed and we were unable to recover it. 00:34:13.823 [2024-07-26 01:16:44.142139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.823 [2024-07-26 01:16:44.142168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.823 qpair failed and we were unable to recover it. 
00:34:13.823 [2024-07-26 01:16:44.142303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.823 [2024-07-26 01:16:44.142329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.823 qpair failed and we were unable to recover it. 00:34:13.823 [2024-07-26 01:16:44.142439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.823 [2024-07-26 01:16:44.142465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.823 qpair failed and we were unable to recover it. 00:34:13.823 [2024-07-26 01:16:44.142600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.142631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.142799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.142824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.142968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.142993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 
00:34:13.824 [2024-07-26 01:16:44.143129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.143156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.143282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.143308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.143422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.143449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.143583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.143609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.143746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.143772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 
00:34:13.824 [2024-07-26 01:16:44.143888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.143914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.144045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.144076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.144238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.144264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.144400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.144425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.144561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.144588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 
00:34:13.824 [2024-07-26 01:16:44.144719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.144744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.144903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.144929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.145053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.145085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.145224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.145249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.145384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.145410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 
00:34:13.824 [2024-07-26 01:16:44.145548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.145573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.145736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.145761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.145902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.145929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.146039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.146079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.146254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.146279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 
00:34:13.824 [2024-07-26 01:16:44.146387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.146413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.146551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.146576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.146708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.146734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.146868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.146894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.147044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.147106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 
00:34:13.824 [2024-07-26 01:16:44.147253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.147281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.147399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.147425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.147535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.147560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.147679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.147704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.147842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.147867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 
00:34:13.824 [2024-07-26 01:16:44.147998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.148023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.148137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.148162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.148304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.148329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.148439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.148464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.148570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.148594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 
00:34:13.824 [2024-07-26 01:16:44.148699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.148724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.148830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.148855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.148971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.148996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.149119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.149145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.149277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.149302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 
00:34:13.824 [2024-07-26 01:16:44.149432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.149457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.149599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.149624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.149752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.149777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.149915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.149940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.150046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.150078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 
00:34:13.824 [2024-07-26 01:16:44.150238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.150263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.150372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.150397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.150536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.150561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.150664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.150689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.150798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.150823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 
00:34:13.824 [2024-07-26 01:16:44.150928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.150953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.151085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.151122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.151233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.151258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.151393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.151418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.824 qpair failed and we were unable to recover it. 00:34:13.824 [2024-07-26 01:16:44.151583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.824 [2024-07-26 01:16:44.151608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.825 qpair failed and we were unable to recover it. 
00:34:13.825 [2024-07-26 01:16:44.151740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.825 [2024-07-26 01:16:44.151764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.825 qpair failed and we were unable to recover it.
[... the three-record pattern above (posix_sock_create connect() failure with errno = 111, nvme_tcp_qpair_connect_sock connection error against addr=10.0.0.2 port=4420, "qpair failed and we were unable to recover it") repeats continuously from [2024-07-26 01:16:44.151740] through [2024-07-26 01:16:44.169656], alternating between tqpair=0xd70600 and tqpair=0x7fba60000b90; every connect() attempt fails the same way and no qpair recovers ...]
00:34:13.828 [2024-07-26 01:16:44.169791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.828 [2024-07-26 01:16:44.169818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.828 qpair failed and we were unable to recover it. 00:34:13.828 [2024-07-26 01:16:44.169952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.828 [2024-07-26 01:16:44.169979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.828 qpair failed and we were unable to recover it. 00:34:13.828 [2024-07-26 01:16:44.170140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.828 [2024-07-26 01:16:44.170167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.828 qpair failed and we were unable to recover it. 00:34:13.828 [2024-07-26 01:16:44.170320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.828 [2024-07-26 01:16:44.170346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.828 qpair failed and we were unable to recover it. 00:34:13.828 [2024-07-26 01:16:44.170479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.828 [2024-07-26 01:16:44.170506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.828 qpair failed and we were unable to recover it. 
00:34:13.828 [2024-07-26 01:16:44.170647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.828 [2024-07-26 01:16:44.170674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.828 qpair failed and we were unable to recover it. 00:34:13.828 [2024-07-26 01:16:44.170810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.828 [2024-07-26 01:16:44.170836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.828 qpair failed and we were unable to recover it. 00:34:13.828 [2024-07-26 01:16:44.170944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.828 [2024-07-26 01:16:44.170971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.828 qpair failed and we were unable to recover it. 00:34:13.828 [2024-07-26 01:16:44.171116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.828 [2024-07-26 01:16:44.171144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.828 qpair failed and we were unable to recover it. 00:34:13.828 [2024-07-26 01:16:44.171250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.828 [2024-07-26 01:16:44.171276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.828 qpair failed and we were unable to recover it. 
00:34:13.828 [2024-07-26 01:16:44.171398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.828 [2024-07-26 01:16:44.171423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.828 qpair failed and we were unable to recover it. 00:34:13.828 [2024-07-26 01:16:44.171554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.828 [2024-07-26 01:16:44.171579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.828 qpair failed and we were unable to recover it. 00:34:13.828 [2024-07-26 01:16:44.171713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.828 [2024-07-26 01:16:44.171738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.828 qpair failed and we were unable to recover it. 00:34:13.828 [2024-07-26 01:16:44.171841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.828 [2024-07-26 01:16:44.171866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.828 qpair failed and we were unable to recover it. 00:34:13.828 [2024-07-26 01:16:44.172001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.828 [2024-07-26 01:16:44.172027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.828 qpair failed and we were unable to recover it. 
00:34:13.828 [2024-07-26 01:16:44.172198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.828 [2024-07-26 01:16:44.172224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.828 qpair failed and we were unable to recover it. 00:34:13.828 [2024-07-26 01:16:44.172339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.828 [2024-07-26 01:16:44.172368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.828 qpair failed and we were unable to recover it. 00:34:13.828 [2024-07-26 01:16:44.172504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.828 [2024-07-26 01:16:44.172531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.828 qpair failed and we were unable to recover it. 00:34:13.828 [2024-07-26 01:16:44.172668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.828 [2024-07-26 01:16:44.172694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.828 qpair failed and we were unable to recover it. 00:34:13.828 [2024-07-26 01:16:44.172800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.828 [2024-07-26 01:16:44.172825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.828 qpair failed and we were unable to recover it. 
00:34:13.828 [2024-07-26 01:16:44.172984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.828 [2024-07-26 01:16:44.173009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.828 qpair failed and we were unable to recover it. 00:34:13.828 [2024-07-26 01:16:44.173155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.828 [2024-07-26 01:16:44.173182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.828 qpair failed and we were unable to recover it. 00:34:13.828 [2024-07-26 01:16:44.173320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.828 [2024-07-26 01:16:44.173346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.828 qpair failed and we were unable to recover it. 00:34:13.828 [2024-07-26 01:16:44.173446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.828 [2024-07-26 01:16:44.173472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.828 qpair failed and we were unable to recover it. 00:34:13.828 [2024-07-26 01:16:44.173634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.828 [2024-07-26 01:16:44.173660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.828 qpair failed and we were unable to recover it. 
00:34:13.828 [2024-07-26 01:16:44.173769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.828 [2024-07-26 01:16:44.173796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.828 qpair failed and we were unable to recover it. 00:34:13.828 [2024-07-26 01:16:44.173930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.828 [2024-07-26 01:16:44.173956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.828 qpair failed and we were unable to recover it. 00:34:13.828 [2024-07-26 01:16:44.174096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.828 [2024-07-26 01:16:44.174122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.828 qpair failed and we were unable to recover it. 00:34:13.828 [2024-07-26 01:16:44.174253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.828 [2024-07-26 01:16:44.174279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.828 qpair failed and we were unable to recover it. 00:34:13.828 [2024-07-26 01:16:44.174422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.828 [2024-07-26 01:16:44.174453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.828 qpair failed and we were unable to recover it. 
00:34:13.828 [2024-07-26 01:16:44.174597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.828 [2024-07-26 01:16:44.174624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.828 qpair failed and we were unable to recover it. 00:34:13.828 [2024-07-26 01:16:44.174762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.174789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 00:34:13.829 [2024-07-26 01:16:44.174894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.174919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 00:34:13.829 [2024-07-26 01:16:44.175038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.175069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 00:34:13.829 [2024-07-26 01:16:44.175206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.175232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 
00:34:13.829 [2024-07-26 01:16:44.175364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.175389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 00:34:13.829 [2024-07-26 01:16:44.175526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.175552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 00:34:13.829 [2024-07-26 01:16:44.175665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.175691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 00:34:13.829 [2024-07-26 01:16:44.175806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.175831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 00:34:13.829 [2024-07-26 01:16:44.175944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.175970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 
00:34:13.829 [2024-07-26 01:16:44.176108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.176136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 00:34:13.829 [2024-07-26 01:16:44.176251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.176276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 00:34:13.829 [2024-07-26 01:16:44.176415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.176441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 00:34:13.829 [2024-07-26 01:16:44.176581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.176606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 00:34:13.829 [2024-07-26 01:16:44.176721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.176746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 
00:34:13.829 [2024-07-26 01:16:44.176882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.176908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 00:34:13.829 [2024-07-26 01:16:44.177048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.177080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 00:34:13.829 [2024-07-26 01:16:44.177214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.177240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 00:34:13.829 [2024-07-26 01:16:44.177344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.177369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 00:34:13.829 [2024-07-26 01:16:44.177500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.177525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 
00:34:13.829 [2024-07-26 01:16:44.177682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.177707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 00:34:13.829 [2024-07-26 01:16:44.177845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.177871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 00:34:13.829 [2024-07-26 01:16:44.178027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.178052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 00:34:13.829 [2024-07-26 01:16:44.178181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.178206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 00:34:13.829 [2024-07-26 01:16:44.178339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.178364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 
00:34:13.829 [2024-07-26 01:16:44.178492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.178517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 00:34:13.829 [2024-07-26 01:16:44.178627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.178657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 00:34:13.829 [2024-07-26 01:16:44.178817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.178843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 00:34:13.829 [2024-07-26 01:16:44.178947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.178972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 00:34:13.829 [2024-07-26 01:16:44.179106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.179132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 
00:34:13.829 [2024-07-26 01:16:44.179246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.179272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 00:34:13.829 [2024-07-26 01:16:44.179408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.179434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 00:34:13.829 [2024-07-26 01:16:44.179547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.179572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 00:34:13.829 [2024-07-26 01:16:44.179703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.179728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 00:34:13.829 [2024-07-26 01:16:44.179857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.179882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 
00:34:13.829 [2024-07-26 01:16:44.180019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.180044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 00:34:13.829 [2024-07-26 01:16:44.180200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.180225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 00:34:13.829 [2024-07-26 01:16:44.180333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.180358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 00:34:13.829 [2024-07-26 01:16:44.180465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.180494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.829 qpair failed and we were unable to recover it. 00:34:13.829 [2024-07-26 01:16:44.180603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.829 [2024-07-26 01:16:44.180629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 
00:34:13.830 [2024-07-26 01:16:44.180762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.180788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.180891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.180916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.181024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.181049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.181166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.181191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.181326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.181351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 
00:34:13.830 [2024-07-26 01:16:44.181513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.181539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.181695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.181720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.181820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.181845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.181987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.182013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.182144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.182169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 
00:34:13.830 [2024-07-26 01:16:44.182329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.182354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.182520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.182545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.182647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.182672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.182821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.182846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.182954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.182979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 
00:34:13.830 [2024-07-26 01:16:44.183114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.183140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.183262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.183287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.183402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.183427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.183522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.183547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.183662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.183687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 
00:34:13.830 [2024-07-26 01:16:44.183800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.183825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.183953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.183978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.184113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.184138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.184240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.184265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.184405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.184430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 
00:34:13.830 [2024-07-26 01:16:44.184530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.184555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.184654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.184679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.184800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.184844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.184987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.185014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.185148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.185174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 
00:34:13.830 [2024-07-26 01:16:44.185309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.185337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.185508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.185534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.185647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.185673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.185836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.185861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.185966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.185992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 
00:34:13.830 [2024-07-26 01:16:44.186143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.186170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.186332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.186358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.186490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.186515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.186682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.186709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.186817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.186845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 
00:34:13.830 [2024-07-26 01:16:44.186956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.186982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.187121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.187148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.187255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.187280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.830 [2024-07-26 01:16:44.187417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.830 [2024-07-26 01:16:44.187443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.830 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.187581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.187606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 
00:34:13.831 [2024-07-26 01:16:44.187735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.187760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.187905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.187930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.188035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.188068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.188230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.188256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.188358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.188383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 
00:34:13.831 [2024-07-26 01:16:44.188493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.188520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.188624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.188649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.188777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.188804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.188906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.188932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.189069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.189105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 
00:34:13.831 [2024-07-26 01:16:44.189238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.189265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.189444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.189470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.189628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.189654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.189811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.189837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.189999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.190026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 
00:34:13.831 [2024-07-26 01:16:44.190201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.190228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.190354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.190379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.190522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.190548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.190657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.190683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.190812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.190838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 
00:34:13.831 [2024-07-26 01:16:44.191005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.191030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.191179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.191205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.191316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.191341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.191492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.191518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.191676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.191702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 
00:34:13.831 [2024-07-26 01:16:44.191837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.191863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.191998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.192024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.192172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.192197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.192331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.192356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.192523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.192549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 
00:34:13.831 [2024-07-26 01:16:44.192664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.192689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.192797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.192822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.192932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.192957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.193090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.193122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.193249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.193274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 
00:34:13.831 [2024-07-26 01:16:44.193403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.193428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.193535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.193564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.193664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.193689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.193798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.193823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.193950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.193974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 
00:34:13.831 [2024-07-26 01:16:44.194087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.194113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.194233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.194259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.194388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.831 [2024-07-26 01:16:44.194413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.831 qpair failed and we were unable to recover it. 00:34:13.831 [2024-07-26 01:16:44.194538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.832 [2024-07-26 01:16:44.194563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.832 qpair failed and we were unable to recover it. 00:34:13.832 [2024-07-26 01:16:44.194697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.832 [2024-07-26 01:16:44.194722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.832 qpair failed and we were unable to recover it. 
00:34:13.832 [2024-07-26 01:16:44.194824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.832 [2024-07-26 01:16:44.194849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.832 qpair failed and we were unable to recover it. 00:34:13.832 [2024-07-26 01:16:44.194950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.832 [2024-07-26 01:16:44.194975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.832 qpair failed and we were unable to recover it. 00:34:13.832 [2024-07-26 01:16:44.195110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.832 [2024-07-26 01:16:44.195135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.832 qpair failed and we were unable to recover it. 00:34:13.832 [2024-07-26 01:16:44.195244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.832 [2024-07-26 01:16:44.195270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.832 qpair failed and we were unable to recover it. 00:34:13.832 [2024-07-26 01:16:44.195434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.832 [2024-07-26 01:16:44.195459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.832 qpair failed and we were unable to recover it. 
00:34:13.832 [2024-07-26 01:16:44.195625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.832 [2024-07-26 01:16:44.195650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.832 qpair failed and we were unable to recover it. 00:34:13.832 [2024-07-26 01:16:44.195812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.832 [2024-07-26 01:16:44.195837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.832 qpair failed and we were unable to recover it. 00:34:13.832 [2024-07-26 01:16:44.195974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.832 [2024-07-26 01:16:44.195999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.832 qpair failed and we were unable to recover it. 00:34:13.832 [2024-07-26 01:16:44.196102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.832 [2024-07-26 01:16:44.196128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.832 qpair failed and we were unable to recover it. 00:34:13.832 [2024-07-26 01:16:44.196234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.832 [2024-07-26 01:16:44.196259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.832 qpair failed and we were unable to recover it. 
00:34:13.832 [2024-07-26 01:16:44.196363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.832 [2024-07-26 01:16:44.196388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.832 qpair failed and we were unable to recover it. 00:34:13.832 [2024-07-26 01:16:44.196550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.832 [2024-07-26 01:16:44.196575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.832 qpair failed and we were unable to recover it. 00:34:13.832 [2024-07-26 01:16:44.196707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.832 [2024-07-26 01:16:44.196732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.832 qpair failed and we were unable to recover it. 00:34:13.832 [2024-07-26 01:16:44.196869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.832 [2024-07-26 01:16:44.196894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.832 qpair failed and we were unable to recover it. 00:34:13.832 [2024-07-26 01:16:44.197025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.832 [2024-07-26 01:16:44.197050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.832 qpair failed and we were unable to recover it. 
00:34:13.832 [2024-07-26 01:16:44.197205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.832 [2024-07-26 01:16:44.197230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.832 qpair failed and we were unable to recover it. 00:34:13.832 [2024-07-26 01:16:44.197341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.832 [2024-07-26 01:16:44.197365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.832 qpair failed and we were unable to recover it. 00:34:13.832 [2024-07-26 01:16:44.197478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.832 [2024-07-26 01:16:44.197504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.832 qpair failed and we were unable to recover it. 00:34:13.832 [2024-07-26 01:16:44.197616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.832 [2024-07-26 01:16:44.197645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.832 qpair failed and we were unable to recover it. 00:34:13.832 [2024-07-26 01:16:44.197779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.832 [2024-07-26 01:16:44.197804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:13.832 qpair failed and we were unable to recover it. 
00:34:13.832 [2024-07-26 01:16:44.197931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.832 [2024-07-26 01:16:44.197957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.832 qpair failed and we were unable to recover it.
00:34:13.832 [2024-07-26 01:16:44.198056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.832 [2024-07-26 01:16:44.198098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.832 qpair failed and we were unable to recover it.
00:34:13.832 [2024-07-26 01:16:44.198246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.832 [2024-07-26 01:16:44.198271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.832 qpair failed and we were unable to recover it.
00:34:13.832 [2024-07-26 01:16:44.198402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.832 [2024-07-26 01:16:44.198427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.832 qpair failed and we were unable to recover it.
00:34:13.832 [2024-07-26 01:16:44.198566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.832 [2024-07-26 01:16:44.198591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.832 qpair failed and we were unable to recover it.
00:34:13.832 [2024-07-26 01:16:44.198698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.832 [2024-07-26 01:16:44.198723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.832 qpair failed and we were unable to recover it.
00:34:13.832 [2024-07-26 01:16:44.198835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.832 [2024-07-26 01:16:44.198860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.832 qpair failed and we were unable to recover it.
00:34:13.832 [2024-07-26 01:16:44.198990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.832 [2024-07-26 01:16:44.199029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.832 qpair failed and we were unable to recover it.
00:34:13.832 [2024-07-26 01:16:44.199151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.832 [2024-07-26 01:16:44.199178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.832 qpair failed and we were unable to recover it.
00:34:13.832 [2024-07-26 01:16:44.199316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.832 [2024-07-26 01:16:44.199342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.832 qpair failed and we were unable to recover it.
00:34:13.832 [2024-07-26 01:16:44.199480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.832 [2024-07-26 01:16:44.199506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.832 qpair failed and we were unable to recover it.
00:34:13.832 [2024-07-26 01:16:44.199623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.832 [2024-07-26 01:16:44.199648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.832 qpair failed and we were unable to recover it.
00:34:13.832 [2024-07-26 01:16:44.199813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.832 [2024-07-26 01:16:44.199839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.832 qpair failed and we were unable to recover it.
00:34:13.832 [2024-07-26 01:16:44.199982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.832 [2024-07-26 01:16:44.200008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.832 qpair failed and we were unable to recover it.
00:34:13.832 [2024-07-26 01:16:44.200188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.832 [2024-07-26 01:16:44.200214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.832 qpair failed and we were unable to recover it.
00:34:13.832 [2024-07-26 01:16:44.200330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.832 [2024-07-26 01:16:44.200356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.832 qpair failed and we were unable to recover it.
00:34:13.832 [2024-07-26 01:16:44.200520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.832 [2024-07-26 01:16:44.200547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.832 qpair failed and we were unable to recover it.
00:34:13.832 [2024-07-26 01:16:44.200681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.832 [2024-07-26 01:16:44.200706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.832 qpair failed and we were unable to recover it.
00:34:13.832 [2024-07-26 01:16:44.200825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.832 [2024-07-26 01:16:44.200850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.832 qpair failed and we were unable to recover it.
00:34:13.832 [2024-07-26 01:16:44.200981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.832 [2024-07-26 01:16:44.201007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.832 qpair failed and we were unable to recover it.
00:34:13.832 [2024-07-26 01:16:44.201152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.832 [2024-07-26 01:16:44.201177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.832 qpair failed and we were unable to recover it.
00:34:13.832 [2024-07-26 01:16:44.201322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.832 [2024-07-26 01:16:44.201348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.832 qpair failed and we were unable to recover it.
00:34:13.833 [2024-07-26 01:16:44.201485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.833 [2024-07-26 01:16:44.201511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.833 qpair failed and we were unable to recover it.
00:34:13.833 [2024-07-26 01:16:44.201669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.833 [2024-07-26 01:16:44.201695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.833 qpair failed and we were unable to recover it.
00:34:13.833 [2024-07-26 01:16:44.201797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.833 [2024-07-26 01:16:44.201822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.833 qpair failed and we were unable to recover it.
00:34:13.833 [2024-07-26 01:16:44.201929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.833 [2024-07-26 01:16:44.201959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.833 qpair failed and we were unable to recover it.
00:34:13.833 [2024-07-26 01:16:44.202096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.833 [2024-07-26 01:16:44.202127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.833 qpair failed and we were unable to recover it.
00:34:13.833 [2024-07-26 01:16:44.202268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.833 [2024-07-26 01:16:44.202293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:13.833 qpair failed and we were unable to recover it.
00:34:13.833 [2024-07-26 01:16:44.202431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.833 [2024-07-26 01:16:44.202458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.833 qpair failed and we were unable to recover it.
00:34:13.833 [2024-07-26 01:16:44.202588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.833 [2024-07-26 01:16:44.202614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.833 qpair failed and we were unable to recover it.
00:34:13.833 [2024-07-26 01:16:44.202750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.833 [2024-07-26 01:16:44.202776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.833 qpair failed and we were unable to recover it.
00:34:13.833 [2024-07-26 01:16:44.202900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.833 [2024-07-26 01:16:44.202926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.833 qpair failed and we were unable to recover it.
00:34:13.833 [2024-07-26 01:16:44.203068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.833 [2024-07-26 01:16:44.203094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.833 qpair failed and we were unable to recover it.
00:34:13.833 [2024-07-26 01:16:44.203198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.833 [2024-07-26 01:16:44.203223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.833 qpair failed and we were unable to recover it.
00:34:13.833 [2024-07-26 01:16:44.203356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.833 [2024-07-26 01:16:44.203383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.833 qpair failed and we were unable to recover it.
00:34:13.833 [2024-07-26 01:16:44.203496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.833 [2024-07-26 01:16:44.203521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.833 qpair failed and we were unable to recover it.
00:34:13.833 [2024-07-26 01:16:44.203656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.833 [2024-07-26 01:16:44.203683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.833 qpair failed and we were unable to recover it.
00:34:13.833 [2024-07-26 01:16:44.203819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.833 [2024-07-26 01:16:44.203844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.833 qpair failed and we were unable to recover it.
00:34:13.833 [2024-07-26 01:16:44.203973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.833 [2024-07-26 01:16:44.203998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.833 qpair failed and we were unable to recover it.
00:34:13.833 [2024-07-26 01:16:44.204115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.833 [2024-07-26 01:16:44.204142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.833 qpair failed and we were unable to recover it.
00:34:13.833 [2024-07-26 01:16:44.204307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.833 [2024-07-26 01:16:44.204332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.833 qpair failed and we were unable to recover it.
00:34:13.833 [2024-07-26 01:16:44.204451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.833 [2024-07-26 01:16:44.204477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.833 qpair failed and we were unable to recover it.
00:34:13.833 [2024-07-26 01:16:44.204615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.833 [2024-07-26 01:16:44.204640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.833 qpair failed and we were unable to recover it.
00:34:13.833 [2024-07-26 01:16:44.204750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.833 [2024-07-26 01:16:44.204774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:13.833 qpair failed and we were unable to recover it.
00:34:14.117 [2024-07-26 01:16:44.204884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.117 [2024-07-26 01:16:44.204909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.117 qpair failed and we were unable to recover it.
00:34:14.117 [2024-07-26 01:16:44.205024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.117 [2024-07-26 01:16:44.205050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.117 qpair failed and we were unable to recover it.
00:34:14.117 [2024-07-26 01:16:44.205193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.117 [2024-07-26 01:16:44.205219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.117 qpair failed and we were unable to recover it.
00:34:14.117 [2024-07-26 01:16:44.205324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.117 [2024-07-26 01:16:44.205349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.117 qpair failed and we were unable to recover it.
00:34:14.117 [2024-07-26 01:16:44.205490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.117 [2024-07-26 01:16:44.205516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.117 qpair failed and we were unable to recover it.
00:34:14.117 [2024-07-26 01:16:44.205652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.117 [2024-07-26 01:16:44.205677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.117 qpair failed and we were unable to recover it.
00:34:14.117 [2024-07-26 01:16:44.205790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.117 [2024-07-26 01:16:44.205814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.117 qpair failed and we were unable to recover it.
00:34:14.117 [2024-07-26 01:16:44.205917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.117 [2024-07-26 01:16:44.205943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.117 qpair failed and we were unable to recover it.
00:34:14.117 [2024-07-26 01:16:44.206099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.117 [2024-07-26 01:16:44.206137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.117 qpair failed and we were unable to recover it.
00:34:14.117 [2024-07-26 01:16:44.206258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.117 [2024-07-26 01:16:44.206285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.117 qpair failed and we were unable to recover it.
00:34:14.117 [2024-07-26 01:16:44.206398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.117 [2024-07-26 01:16:44.206424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.117 qpair failed and we were unable to recover it.
00:34:14.117 [2024-07-26 01:16:44.206530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.117 [2024-07-26 01:16:44.206555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.117 qpair failed and we were unable to recover it.
00:34:14.117 [2024-07-26 01:16:44.206664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.117 [2024-07-26 01:16:44.206689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.117 qpair failed and we were unable to recover it.
00:34:14.117 [2024-07-26 01:16:44.206798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.117 [2024-07-26 01:16:44.206823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.117 qpair failed and we were unable to recover it.
00:34:14.117 [2024-07-26 01:16:44.206954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.117 [2024-07-26 01:16:44.206979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.117 qpair failed and we were unable to recover it.
00:34:14.117 [2024-07-26 01:16:44.207081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.117 [2024-07-26 01:16:44.207107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.117 qpair failed and we were unable to recover it.
00:34:14.117 [2024-07-26 01:16:44.207228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.117 [2024-07-26 01:16:44.207254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.117 qpair failed and we were unable to recover it.
00:34:14.117 [2024-07-26 01:16:44.207391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.117 [2024-07-26 01:16:44.207416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.117 qpair failed and we were unable to recover it.
00:34:14.117 [2024-07-26 01:16:44.207550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.117 [2024-07-26 01:16:44.207576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.117 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.207690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.207717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.207853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.207880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.208022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.208047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.208174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.208201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.208313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.208337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.208441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.208466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.208591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.208615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.208746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.208773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.208880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.208905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.209010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.209035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.209169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.209195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.209299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.209324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.209432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.209457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.209595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.209622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.209754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.209780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.209936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.209961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.210087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.210115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.210285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.210310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.210443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.210468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.210579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.210605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.210731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.210756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.210859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.210885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.211020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.211046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.211190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.211215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.211319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.211344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.211480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.211506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.211614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.211640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.211776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.211801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.211903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.211928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.212068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.212099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.212233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.212259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.212429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.118 [2024-07-26 01:16:44.212456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.118 qpair failed and we were unable to recover it.
00:34:14.118 [2024-07-26 01:16:44.212595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.118 [2024-07-26 01:16:44.212621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.118 qpair failed and we were unable to recover it. 00:34:14.118 [2024-07-26 01:16:44.212780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.118 [2024-07-26 01:16:44.212805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.118 qpair failed and we were unable to recover it. 00:34:14.118 [2024-07-26 01:16:44.212921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.118 [2024-07-26 01:16:44.212947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.118 qpair failed and we were unable to recover it. 00:34:14.118 [2024-07-26 01:16:44.213109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.118 [2024-07-26 01:16:44.213136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.118 qpair failed and we were unable to recover it. 00:34:14.118 [2024-07-26 01:16:44.213264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.118 [2024-07-26 01:16:44.213289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.118 qpair failed and we were unable to recover it. 
00:34:14.118 [2024-07-26 01:16:44.213429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.213456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 00:34:14.119 [2024-07-26 01:16:44.213591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.213616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 00:34:14.119 [2024-07-26 01:16:44.213748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.213774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 00:34:14.119 [2024-07-26 01:16:44.213935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.213961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 00:34:14.119 [2024-07-26 01:16:44.214088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.214114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 
00:34:14.119 [2024-07-26 01:16:44.214219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.214243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 00:34:14.119 [2024-07-26 01:16:44.214361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.214387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 00:34:14.119 [2024-07-26 01:16:44.214521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.214546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 00:34:14.119 [2024-07-26 01:16:44.214680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.214706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 00:34:14.119 [2024-07-26 01:16:44.214821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.214847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 
00:34:14.119 [2024-07-26 01:16:44.214984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.215009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 00:34:14.119 [2024-07-26 01:16:44.215155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.215180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 00:34:14.119 [2024-07-26 01:16:44.215339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.215364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 00:34:14.119 [2024-07-26 01:16:44.215497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.215523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 00:34:14.119 [2024-07-26 01:16:44.215634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.215658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 
00:34:14.119 [2024-07-26 01:16:44.215826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.215852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 00:34:14.119 [2024-07-26 01:16:44.216015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.216040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 00:34:14.119 [2024-07-26 01:16:44.216154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.216179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 00:34:14.119 [2024-07-26 01:16:44.216317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.216342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 00:34:14.119 [2024-07-26 01:16:44.216466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.216507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 
00:34:14.119 [2024-07-26 01:16:44.216653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.216680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 00:34:14.119 [2024-07-26 01:16:44.216817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.216843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 00:34:14.119 [2024-07-26 01:16:44.216973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.216998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 00:34:14.119 [2024-07-26 01:16:44.217137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.217172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 00:34:14.119 [2024-07-26 01:16:44.217311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.217337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 
00:34:14.119 [2024-07-26 01:16:44.217468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.217493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 00:34:14.119 [2024-07-26 01:16:44.217625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.217650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 00:34:14.119 [2024-07-26 01:16:44.217784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.217809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 00:34:14.119 [2024-07-26 01:16:44.217950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.217975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 00:34:14.119 [2024-07-26 01:16:44.218143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.218174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 
00:34:14.119 [2024-07-26 01:16:44.218285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.218311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 00:34:14.119 [2024-07-26 01:16:44.218472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.218497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 00:34:14.119 [2024-07-26 01:16:44.218606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.218631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 00:34:14.119 [2024-07-26 01:16:44.218770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.218795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 00:34:14.119 [2024-07-26 01:16:44.218924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.218949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 
00:34:14.119 [2024-07-26 01:16:44.219053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.219088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 00:34:14.119 [2024-07-26 01:16:44.219199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.119 [2024-07-26 01:16:44.219224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.119 qpair failed and we were unable to recover it. 00:34:14.119 [2024-07-26 01:16:44.219364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.219389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 00:34:14.120 [2024-07-26 01:16:44.219545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.219571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 00:34:14.120 [2024-07-26 01:16:44.219680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.219705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 
00:34:14.120 [2024-07-26 01:16:44.219813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.219838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 00:34:14.120 [2024-07-26 01:16:44.219942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.219967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 00:34:14.120 [2024-07-26 01:16:44.220079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.220114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 00:34:14.120 [2024-07-26 01:16:44.220215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.220239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 00:34:14.120 [2024-07-26 01:16:44.220369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.220394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 
00:34:14.120 [2024-07-26 01:16:44.220557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.220583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 00:34:14.120 [2024-07-26 01:16:44.220720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.220749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 00:34:14.120 [2024-07-26 01:16:44.220855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.220880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 00:34:14.120 [2024-07-26 01:16:44.221043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.221076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 00:34:14.120 [2024-07-26 01:16:44.221190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.221215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 
00:34:14.120 [2024-07-26 01:16:44.221317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.221342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 00:34:14.120 [2024-07-26 01:16:44.221489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.221514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 00:34:14.120 [2024-07-26 01:16:44.221622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.221647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 00:34:14.120 [2024-07-26 01:16:44.221748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.221772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 00:34:14.120 [2024-07-26 01:16:44.221892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.221919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 
00:34:14.120 [2024-07-26 01:16:44.222047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.222080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 00:34:14.120 [2024-07-26 01:16:44.222213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.222239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 00:34:14.120 [2024-07-26 01:16:44.222372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.222398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 00:34:14.120 [2024-07-26 01:16:44.222527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.222552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 00:34:14.120 [2024-07-26 01:16:44.222659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.222684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 
00:34:14.120 [2024-07-26 01:16:44.222850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.222876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 00:34:14.120 [2024-07-26 01:16:44.223038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.223068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 00:34:14.120 [2024-07-26 01:16:44.223208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.223233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 00:34:14.120 [2024-07-26 01:16:44.223334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.223359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 00:34:14.120 [2024-07-26 01:16:44.223496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.223522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 
00:34:14.120 [2024-07-26 01:16:44.223683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.223708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 00:34:14.120 [2024-07-26 01:16:44.223819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.223844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 00:34:14.120 [2024-07-26 01:16:44.223963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.224003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 00:34:14.120 [2024-07-26 01:16:44.224129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.224157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 00:34:14.120 [2024-07-26 01:16:44.224301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.224327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 
00:34:14.120 [2024-07-26 01:16:44.224493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.224519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 00:34:14.120 [2024-07-26 01:16:44.224622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.224647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 00:34:14.120 [2024-07-26 01:16:44.224782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.224807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 00:34:14.120 [2024-07-26 01:16:44.224943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.120 [2024-07-26 01:16:44.224975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.120 qpair failed and we were unable to recover it. 00:34:14.120 [2024-07-26 01:16:44.225117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.225143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 
00:34:14.121 [2024-07-26 01:16:44.225257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.225283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 00:34:14.121 [2024-07-26 01:16:44.225416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.225442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 00:34:14.121 [2024-07-26 01:16:44.225544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.225569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 00:34:14.121 [2024-07-26 01:16:44.225738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.225764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 00:34:14.121 [2024-07-26 01:16:44.225869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.225893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 
00:34:14.121 [2024-07-26 01:16:44.226006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.226032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 00:34:14.121 [2024-07-26 01:16:44.226153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.226179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 00:34:14.121 [2024-07-26 01:16:44.226308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.226334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 00:34:14.121 [2024-07-26 01:16:44.226438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.226463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 00:34:14.121 [2024-07-26 01:16:44.226576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.226602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 
00:34:14.121 [2024-07-26 01:16:44.226708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.226733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 00:34:14.121 [2024-07-26 01:16:44.226888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.226914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 00:34:14.121 [2024-07-26 01:16:44.227095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.227122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 00:34:14.121 [2024-07-26 01:16:44.227244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.227269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 00:34:14.121 [2024-07-26 01:16:44.227377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.227403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 
00:34:14.121 [2024-07-26 01:16:44.227545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.227572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 00:34:14.121 [2024-07-26 01:16:44.227682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.227708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 00:34:14.121 [2024-07-26 01:16:44.227840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.227866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 00:34:14.121 [2024-07-26 01:16:44.228002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.228029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 00:34:14.121 [2024-07-26 01:16:44.228161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.228187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 
00:34:14.121 [2024-07-26 01:16:44.228323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.228348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 00:34:14.121 [2024-07-26 01:16:44.228486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.228511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 00:34:14.121 [2024-07-26 01:16:44.228624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.228650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 00:34:14.121 [2024-07-26 01:16:44.228767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.228792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 00:34:14.121 [2024-07-26 01:16:44.228952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.228978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 
00:34:14.121 [2024-07-26 01:16:44.229101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.229139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 00:34:14.121 [2024-07-26 01:16:44.229265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.229291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 00:34:14.121 [2024-07-26 01:16:44.229428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.229454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 00:34:14.121 [2024-07-26 01:16:44.229594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.229618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 00:34:14.121 [2024-07-26 01:16:44.229725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.229750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 
00:34:14.121 [2024-07-26 01:16:44.229886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.229912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 00:34:14.121 [2024-07-26 01:16:44.230021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.230046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 00:34:14.121 [2024-07-26 01:16:44.230179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.230205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 00:34:14.121 [2024-07-26 01:16:44.230345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.230370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 00:34:14.121 [2024-07-26 01:16:44.230524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.230550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 
00:34:14.121 [2024-07-26 01:16:44.230690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.230715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 00:34:14.121 [2024-07-26 01:16:44.230826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.121 [2024-07-26 01:16:44.230851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.121 qpair failed and we were unable to recover it. 00:34:14.122 [2024-07-26 01:16:44.230966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.230993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 00:34:14.122 [2024-07-26 01:16:44.231137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.231164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 00:34:14.122 [2024-07-26 01:16:44.231274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.231299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 
00:34:14.122 [2024-07-26 01:16:44.231463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.231488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 00:34:14.122 [2024-07-26 01:16:44.231600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.231625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 00:34:14.122 [2024-07-26 01:16:44.231772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.231798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 00:34:14.122 [2024-07-26 01:16:44.231929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.231954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 00:34:14.122 [2024-07-26 01:16:44.232095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.232120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 
00:34:14.122 [2024-07-26 01:16:44.232229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.232255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 00:34:14.122 [2024-07-26 01:16:44.232417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.232442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 00:34:14.122 [2024-07-26 01:16:44.232579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.232605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 00:34:14.122 [2024-07-26 01:16:44.232713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.232739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 00:34:14.122 [2024-07-26 01:16:44.232894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.232919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 
00:34:14.122 [2024-07-26 01:16:44.233052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.233084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 00:34:14.122 [2024-07-26 01:16:44.233217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.233242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 00:34:14.122 [2024-07-26 01:16:44.233382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.233412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 00:34:14.122 [2024-07-26 01:16:44.233544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.233569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 00:34:14.122 [2024-07-26 01:16:44.233742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.233767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 
00:34:14.122 [2024-07-26 01:16:44.233897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.233922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 00:34:14.122 [2024-07-26 01:16:44.234032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.234057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 00:34:14.122 [2024-07-26 01:16:44.234206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.234232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 00:34:14.122 [2024-07-26 01:16:44.234361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.234386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 00:34:14.122 [2024-07-26 01:16:44.234547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.234572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 
00:34:14.122 [2024-07-26 01:16:44.234673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.234698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 00:34:14.122 [2024-07-26 01:16:44.234829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.234854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 00:34:14.122 [2024-07-26 01:16:44.234958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.234983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 00:34:14.122 [2024-07-26 01:16:44.235308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.235335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 00:34:14.122 [2024-07-26 01:16:44.235475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.235500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 
00:34:14.122 [2024-07-26 01:16:44.235661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.235686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 00:34:14.122 [2024-07-26 01:16:44.235854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.235880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 00:34:14.122 [2024-07-26 01:16:44.236011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.236036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 00:34:14.122 [2024-07-26 01:16:44.236192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.122 [2024-07-26 01:16:44.236218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.122 qpair failed and we were unable to recover it. 00:34:14.123 [2024-07-26 01:16:44.236356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.236382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 
00:34:14.123 [2024-07-26 01:16:44.236495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.236520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 00:34:14.123 [2024-07-26 01:16:44.236656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.236683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 00:34:14.123 [2024-07-26 01:16:44.236824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.236850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 00:34:14.123 [2024-07-26 01:16:44.236960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.236985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 00:34:14.123 [2024-07-26 01:16:44.237089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.237115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 
00:34:14.123 [2024-07-26 01:16:44.237224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.237249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 00:34:14.123 [2024-07-26 01:16:44.237389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.237414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 00:34:14.123 [2024-07-26 01:16:44.237553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.237578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 00:34:14.123 [2024-07-26 01:16:44.237682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.237708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 00:34:14.123 [2024-07-26 01:16:44.237814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.237844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 
00:34:14.123 [2024-07-26 01:16:44.237983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.238008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 00:34:14.123 [2024-07-26 01:16:44.238147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.238172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 00:34:14.123 [2024-07-26 01:16:44.238278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.238303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 00:34:14.123 [2024-07-26 01:16:44.238439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.238465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 00:34:14.123 [2024-07-26 01:16:44.238602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.238627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 
00:34:14.123 [2024-07-26 01:16:44.238759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.238784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 00:34:14.123 [2024-07-26 01:16:44.238915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.238940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 00:34:14.123 [2024-07-26 01:16:44.239102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.239128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 00:34:14.123 [2024-07-26 01:16:44.239262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.239287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 00:34:14.123 [2024-07-26 01:16:44.239415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.239439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 
00:34:14.123 [2024-07-26 01:16:44.239569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.239594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 00:34:14.123 [2024-07-26 01:16:44.239700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.239726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 00:34:14.123 [2024-07-26 01:16:44.239859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.239883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 00:34:14.123 [2024-07-26 01:16:44.240022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.240047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 00:34:14.123 [2024-07-26 01:16:44.240155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.240180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 
00:34:14.123 [2024-07-26 01:16:44.240318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.240343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 00:34:14.123 [2024-07-26 01:16:44.240478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.240503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 00:34:14.123 [2024-07-26 01:16:44.240629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.240655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 00:34:14.123 [2024-07-26 01:16:44.240766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.240791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 00:34:14.123 [2024-07-26 01:16:44.240896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.240923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 
00:34:14.123 [2024-07-26 01:16:44.241046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.241077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 00:34:14.123 [2024-07-26 01:16:44.241218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.241243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 00:34:14.123 [2024-07-26 01:16:44.241373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.241398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 00:34:14.123 [2024-07-26 01:16:44.241529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.241554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 00:34:14.123 [2024-07-26 01:16:44.241664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.241689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 
00:34:14.123 [2024-07-26 01:16:44.241853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.241878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.123 qpair failed and we were unable to recover it. 00:34:14.123 [2024-07-26 01:16:44.242030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.123 [2024-07-26 01:16:44.242077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.124 qpair failed and we were unable to recover it. 00:34:14.124 [2024-07-26 01:16:44.242232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.124 [2024-07-26 01:16:44.242261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.124 qpair failed and we were unable to recover it. 00:34:14.124 [2024-07-26 01:16:44.242429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.124 [2024-07-26 01:16:44.242455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.124 qpair failed and we were unable to recover it. 00:34:14.124 [2024-07-26 01:16:44.242589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.124 [2024-07-26 01:16:44.242614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.124 qpair failed and we were unable to recover it. 
00:34:14.124 [2024-07-26 01:16:44.242721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.242747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.242911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.242936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.243075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.243101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.243208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.243235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.243395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.243421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.243555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.243580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.243715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.243740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.243853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.243878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.244015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.244042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.244190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.244215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.244355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.244380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.244483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.244508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.244623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.244648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.244752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.244777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.244954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.244994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.245164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.245192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.245319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.245344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.245475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.245501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.245642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.245669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.245835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.245860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.245966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.245993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.246155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.246181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.246342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.246367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.246474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.246499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.246644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.246669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.246804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.246829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.246963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.246988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.247102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.247127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.247292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.247317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.247453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.247479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.247611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.247636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.247739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.247764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.247897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.247922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.124 [2024-07-26 01:16:44.248033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.124 [2024-07-26 01:16:44.248064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.124 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.248200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.248225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.248324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.248349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.248451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.248476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.248581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.248610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.248742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.248767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.248922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.248947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.249064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.249090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.249195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.249221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.249354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.249380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.249489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.249514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.249650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.249675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.249814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.249839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.249946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.249971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.250104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.250129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.250261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.250285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.250402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.250427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.250561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.250586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.250748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.250773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.250907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.250932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.251035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.251065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.251183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.251209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.251318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.251345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.251478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.251503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.251647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.251672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.251778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.251803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.251943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.251968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.252103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.252129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.252240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.252266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.252422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.252448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.252572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.252596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.252706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.252736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.252873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.252898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.253034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.253066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.253177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.253202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.253330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.253355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.253464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.253489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.253603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.253628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.125 qpair failed and we were unable to recover it.
00:34:14.125 [2024-07-26 01:16:44.253750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.125 [2024-07-26 01:16:44.253789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.126 qpair failed and we were unable to recover it.
00:34:14.126 [2024-07-26 01:16:44.253933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.126 [2024-07-26 01:16:44.253960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.126 qpair failed and we were unable to recover it.
00:34:14.126 [2024-07-26 01:16:44.254101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.126 [2024-07-26 01:16:44.254128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.126 qpair failed and we were unable to recover it.
00:34:14.126 [2024-07-26 01:16:44.254268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.126 [2024-07-26 01:16:44.254294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.126 qpair failed and we were unable to recover it.
00:34:14.126 [2024-07-26 01:16:44.254456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.126 [2024-07-26 01:16:44.254481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.126 qpair failed and we were unable to recover it.
00:34:14.126 [2024-07-26 01:16:44.254595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.126 [2024-07-26 01:16:44.254621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.126 qpair failed and we were unable to recover it.
00:34:14.126 [2024-07-26 01:16:44.254762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.126 [2024-07-26 01:16:44.254788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.126 qpair failed and we were unable to recover it.
00:34:14.126 [2024-07-26 01:16:44.254958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.126 [2024-07-26 01:16:44.254984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.126 qpair failed and we were unable to recover it.
00:34:14.126 [2024-07-26 01:16:44.255102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.126 [2024-07-26 01:16:44.255129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.126 qpair failed and we were unable to recover it.
00:34:14.126 [2024-07-26 01:16:44.255226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.126 [2024-07-26 01:16:44.255252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.126 qpair failed and we were unable to recover it.
00:34:14.126 [2024-07-26 01:16:44.255356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.126 [2024-07-26 01:16:44.255382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.126 qpair failed and we were unable to recover it.
00:34:14.126 [2024-07-26 01:16:44.255514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.126 [2024-07-26 01:16:44.255539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.126 qpair failed and we were unable to recover it.
00:34:14.126 [2024-07-26 01:16:44.255667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.126 [2024-07-26 01:16:44.255693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.126 qpair failed and we were unable to recover it.
00:34:14.126 [2024-07-26 01:16:44.255799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.126 [2024-07-26 01:16:44.255825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.126 qpair failed and we were unable to recover it.
00:34:14.126 [2024-07-26 01:16:44.255969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.126 [2024-07-26 01:16:44.255996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.126 qpair failed and we were unable to recover it.
00:34:14.126 [2024-07-26 01:16:44.256118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.126 [2024-07-26 01:16:44.256146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.126 qpair failed and we were unable to recover it.
00:34:14.126 [2024-07-26 01:16:44.256281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.126 [2024-07-26 01:16:44.256306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.126 qpair failed and we were unable to recover it.
00:34:14.126 [2024-07-26 01:16:44.256436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.126 [2024-07-26 01:16:44.256461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.126 qpair failed and we were unable to recover it.
00:34:14.126 [2024-07-26 01:16:44.256599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.126 [2024-07-26 01:16:44.256624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.126 qpair failed and we were unable to recover it.
00:34:14.126 [2024-07-26 01:16:44.256760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.126 [2024-07-26 01:16:44.256785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.126 qpair failed and we were unable to recover it.
00:34:14.126 [2024-07-26 01:16:44.256924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.126 [2024-07-26 01:16:44.256954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.126 qpair failed and we were unable to recover it.
00:34:14.126 [2024-07-26 01:16:44.257095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.126 [2024-07-26 01:16:44.257123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.126 qpair failed and we were unable to recover it.
00:34:14.126 [2024-07-26 01:16:44.257265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.126 [2024-07-26 01:16:44.257290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.126 qpair failed and we were unable to recover it.
00:34:14.126 [2024-07-26 01:16:44.257428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.126 [2024-07-26 01:16:44.257454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.126 qpair failed and we were unable to recover it.
00:34:14.126 [2024-07-26 01:16:44.257616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.126 [2024-07-26 01:16:44.257641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.126 qpair failed and we were unable to recover it. 00:34:14.126 [2024-07-26 01:16:44.257806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.126 [2024-07-26 01:16:44.257831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.126 qpair failed and we were unable to recover it. 00:34:14.126 [2024-07-26 01:16:44.257970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.126 [2024-07-26 01:16:44.257996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.126 qpair failed and we were unable to recover it. 00:34:14.126 [2024-07-26 01:16:44.258105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.126 [2024-07-26 01:16:44.258131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.126 qpair failed and we were unable to recover it. 00:34:14.126 [2024-07-26 01:16:44.258234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.126 [2024-07-26 01:16:44.258259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.126 qpair failed and we were unable to recover it. 
00:34:14.126 [2024-07-26 01:16:44.258378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.126 [2024-07-26 01:16:44.258405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.126 qpair failed and we were unable to recover it. 00:34:14.126 [2024-07-26 01:16:44.258570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.126 [2024-07-26 01:16:44.258596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.126 qpair failed and we were unable to recover it. 00:34:14.126 [2024-07-26 01:16:44.258704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.126 [2024-07-26 01:16:44.258729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.126 qpair failed and we were unable to recover it. 00:34:14.126 [2024-07-26 01:16:44.258867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.126 [2024-07-26 01:16:44.258892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.126 qpair failed and we were unable to recover it. 00:34:14.126 [2024-07-26 01:16:44.259001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.126 [2024-07-26 01:16:44.259026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.126 qpair failed and we were unable to recover it. 
00:34:14.127 [2024-07-26 01:16:44.259197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.259222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 00:34:14.127 [2024-07-26 01:16:44.259333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.259359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 00:34:14.127 [2024-07-26 01:16:44.259490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.259515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 00:34:14.127 [2024-07-26 01:16:44.259647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.259672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 00:34:14.127 [2024-07-26 01:16:44.259834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.259859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 
00:34:14.127 [2024-07-26 01:16:44.260012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.260051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 00:34:14.127 [2024-07-26 01:16:44.260185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.260213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 00:34:14.127 [2024-07-26 01:16:44.260378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.260405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 00:34:14.127 [2024-07-26 01:16:44.260517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.260544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 00:34:14.127 [2024-07-26 01:16:44.260705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.260731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 
00:34:14.127 [2024-07-26 01:16:44.260837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.260863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 00:34:14.127 [2024-07-26 01:16:44.260996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.261022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 00:34:14.127 [2024-07-26 01:16:44.261165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.261192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 00:34:14.127 [2024-07-26 01:16:44.261330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.261362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 00:34:14.127 [2024-07-26 01:16:44.261534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.261559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 
00:34:14.127 [2024-07-26 01:16:44.261671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.261698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 00:34:14.127 [2024-07-26 01:16:44.261839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.261865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 00:34:14.127 [2024-07-26 01:16:44.262002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.262028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 00:34:14.127 [2024-07-26 01:16:44.262144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.262170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 00:34:14.127 [2024-07-26 01:16:44.262326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.262352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 
00:34:14.127 [2024-07-26 01:16:44.262463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.262488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 00:34:14.127 [2024-07-26 01:16:44.262598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.262624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 00:34:14.127 [2024-07-26 01:16:44.262765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.262792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 00:34:14.127 [2024-07-26 01:16:44.262926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.262952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 00:34:14.127 [2024-07-26 01:16:44.263083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.263109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 
00:34:14.127 [2024-07-26 01:16:44.263218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.263243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 00:34:14.127 [2024-07-26 01:16:44.263345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.263370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 00:34:14.127 [2024-07-26 01:16:44.263486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.263511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 00:34:14.127 [2024-07-26 01:16:44.263661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.263687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 00:34:14.127 [2024-07-26 01:16:44.263796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.263821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 
00:34:14.127 [2024-07-26 01:16:44.263922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.263948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 00:34:14.127 [2024-07-26 01:16:44.264066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.264091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 00:34:14.127 [2024-07-26 01:16:44.264232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.264258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 00:34:14.127 [2024-07-26 01:16:44.264391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.264416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 00:34:14.127 [2024-07-26 01:16:44.264521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.264546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 
00:34:14.127 [2024-07-26 01:16:44.264709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.264734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 00:34:14.127 [2024-07-26 01:16:44.264844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.127 [2024-07-26 01:16:44.264870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.127 qpair failed and we were unable to recover it. 00:34:14.127 [2024-07-26 01:16:44.264976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.265002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 00:34:14.128 [2024-07-26 01:16:44.265143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.265169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 00:34:14.128 [2024-07-26 01:16:44.265307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.265333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 
00:34:14.128 [2024-07-26 01:16:44.265470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.265499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 00:34:14.128 [2024-07-26 01:16:44.265617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.265642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 00:34:14.128 [2024-07-26 01:16:44.265770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.265795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 00:34:14.128 [2024-07-26 01:16:44.265924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.265950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 00:34:14.128 [2024-07-26 01:16:44.266052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.266082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 
00:34:14.128 [2024-07-26 01:16:44.266259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.266284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 00:34:14.128 [2024-07-26 01:16:44.266427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.266453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 00:34:14.128 [2024-07-26 01:16:44.266585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.266609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 00:34:14.128 [2024-07-26 01:16:44.266744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.266768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 00:34:14.128 [2024-07-26 01:16:44.266880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.266906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 
00:34:14.128 [2024-07-26 01:16:44.267014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.267039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 00:34:14.128 [2024-07-26 01:16:44.267180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.267205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 00:34:14.128 [2024-07-26 01:16:44.267318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.267343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 00:34:14.128 [2024-07-26 01:16:44.267479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.267504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 00:34:14.128 [2024-07-26 01:16:44.267648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.267674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 
00:34:14.128 [2024-07-26 01:16:44.267810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.267836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 00:34:14.128 [2024-07-26 01:16:44.267970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.267994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 00:34:14.128 [2024-07-26 01:16:44.268116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.268143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 00:34:14.128 [2024-07-26 01:16:44.268307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.268333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 00:34:14.128 [2024-07-26 01:16:44.268448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.268473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 
00:34:14.128 [2024-07-26 01:16:44.268617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.268642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 00:34:14.128 [2024-07-26 01:16:44.268785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.268810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 00:34:14.128 [2024-07-26 01:16:44.268917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.268943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 00:34:14.128 [2024-07-26 01:16:44.269076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.269102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 00:34:14.128 [2024-07-26 01:16:44.269243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.269268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 
00:34:14.128 [2024-07-26 01:16:44.269376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.269401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 00:34:14.128 [2024-07-26 01:16:44.269534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.269559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 00:34:14.128 [2024-07-26 01:16:44.269692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.269721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 00:34:14.128 [2024-07-26 01:16:44.269876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.269900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 00:34:14.128 [2024-07-26 01:16:44.270029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.270055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 
00:34:14.128 [2024-07-26 01:16:44.270178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.270203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 00:34:14.128 [2024-07-26 01:16:44.270340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.270366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 00:34:14.128 [2024-07-26 01:16:44.270500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.270525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 00:34:14.128 [2024-07-26 01:16:44.270628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.270653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 00:34:14.128 [2024-07-26 01:16:44.270826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.270865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.128 qpair failed and we were unable to recover it. 
00:34:14.128 [2024-07-26 01:16:44.270984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.128 [2024-07-26 01:16:44.271013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 00:34:14.129 [2024-07-26 01:16:44.271164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.271192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 00:34:14.129 [2024-07-26 01:16:44.271325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.271351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 00:34:14.129 [2024-07-26 01:16:44.271489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.271516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 00:34:14.129 [2024-07-26 01:16:44.271656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.271683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 
00:34:14.129 [2024-07-26 01:16:44.271815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.271841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 00:34:14.129 [2024-07-26 01:16:44.271988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.272014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 00:34:14.129 [2024-07-26 01:16:44.272198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.272225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 00:34:14.129 [2024-07-26 01:16:44.272385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.272411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 00:34:14.129 [2024-07-26 01:16:44.272515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.272541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 
00:34:14.129 [2024-07-26 01:16:44.272675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.272701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 00:34:14.129 [2024-07-26 01:16:44.272835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.272860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 00:34:14.129 [2024-07-26 01:16:44.272998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.273023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 00:34:14.129 [2024-07-26 01:16:44.273163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.273190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 00:34:14.129 [2024-07-26 01:16:44.273347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.273373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 
00:34:14.129 [2024-07-26 01:16:44.273480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.273505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 00:34:14.129 [2024-07-26 01:16:44.273623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.273650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 00:34:14.129 [2024-07-26 01:16:44.273780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.273805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 00:34:14.129 [2024-07-26 01:16:44.273951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.273977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 00:34:14.129 [2024-07-26 01:16:44.274138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.274168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 
00:34:14.129 [2024-07-26 01:16:44.274271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.274297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 00:34:14.129 [2024-07-26 01:16:44.274431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.274458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 00:34:14.129 [2024-07-26 01:16:44.274586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.274612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 00:34:14.129 [2024-07-26 01:16:44.274747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.274773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 00:34:14.129 [2024-07-26 01:16:44.274909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.274935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 
00:34:14.129 [2024-07-26 01:16:44.275067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.275094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 00:34:14.129 [2024-07-26 01:16:44.275228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.275253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 00:34:14.129 [2024-07-26 01:16:44.275365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.275391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 00:34:14.129 [2024-07-26 01:16:44.275530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.275556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 00:34:14.129 [2024-07-26 01:16:44.275690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.275718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 
00:34:14.129 [2024-07-26 01:16:44.275877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.275903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 00:34:14.129 [2024-07-26 01:16:44.276035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.276072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 00:34:14.129 [2024-07-26 01:16:44.276179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.276204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 00:34:14.129 [2024-07-26 01:16:44.276315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.276341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 00:34:14.129 [2024-07-26 01:16:44.276484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.276509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 
00:34:14.129 [2024-07-26 01:16:44.276639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.276664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 00:34:14.129 [2024-07-26 01:16:44.276775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.276801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 00:34:14.129 [2024-07-26 01:16:44.276941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.129 [2024-07-26 01:16:44.276967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.129 qpair failed and we were unable to recover it. 00:34:14.129 [2024-07-26 01:16:44.277103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.277131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 00:34:14.130 [2024-07-26 01:16:44.277267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.277294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 
00:34:14.130 [2024-07-26 01:16:44.277430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.277456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 00:34:14.130 [2024-07-26 01:16:44.277593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.277619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 00:34:14.130 [2024-07-26 01:16:44.277724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.277750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 00:34:14.130 [2024-07-26 01:16:44.277913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.277939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 00:34:14.130 [2024-07-26 01:16:44.278087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.278114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 
00:34:14.130 [2024-07-26 01:16:44.278227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.278253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 00:34:14.130 [2024-07-26 01:16:44.278389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.278415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 00:34:14.130 [2024-07-26 01:16:44.278561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.278588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 00:34:14.130 [2024-07-26 01:16:44.278692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.278717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 00:34:14.130 [2024-07-26 01:16:44.278854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.278879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 
00:34:14.130 [2024-07-26 01:16:44.279013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.279038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 00:34:14.130 [2024-07-26 01:16:44.279156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.279181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 00:34:14.130 [2024-07-26 01:16:44.279318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.279343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 00:34:14.130 [2024-07-26 01:16:44.279467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.279492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 00:34:14.130 [2024-07-26 01:16:44.279660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.279685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 
00:34:14.130 [2024-07-26 01:16:44.279789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.279814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 00:34:14.130 [2024-07-26 01:16:44.279916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.279941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 00:34:14.130 [2024-07-26 01:16:44.280087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.280113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 00:34:14.130 [2024-07-26 01:16:44.280211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.280237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 00:34:14.130 [2024-07-26 01:16:44.280398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.280423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 
00:34:14.130 [2024-07-26 01:16:44.280554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.280579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 00:34:14.130 [2024-07-26 01:16:44.280692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.280717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 00:34:14.130 [2024-07-26 01:16:44.280820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.280845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 00:34:14.130 [2024-07-26 01:16:44.280952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.280977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 00:34:14.130 [2024-07-26 01:16:44.281108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.281134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 
00:34:14.130 [2024-07-26 01:16:44.281268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.281293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 00:34:14.130 [2024-07-26 01:16:44.281425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.281450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 00:34:14.130 [2024-07-26 01:16:44.281552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.281577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 00:34:14.130 [2024-07-26 01:16:44.281713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.281738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 00:34:14.130 [2024-07-26 01:16:44.281855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.281894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 
00:34:14.130 [2024-07-26 01:16:44.282040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.282076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 00:34:14.130 [2024-07-26 01:16:44.282186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.282214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 00:34:14.130 [2024-07-26 01:16:44.282322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.282348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 00:34:14.130 [2024-07-26 01:16:44.282462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.282492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 00:34:14.130 [2024-07-26 01:16:44.282655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.282681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 
00:34:14.130 [2024-07-26 01:16:44.282792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.130 [2024-07-26 01:16:44.282818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.130 qpair failed and we were unable to recover it. 00:34:14.130 [2024-07-26 01:16:44.282930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.131 [2024-07-26 01:16:44.282956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.131 qpair failed and we were unable to recover it. 00:34:14.131 [2024-07-26 01:16:44.283083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.131 [2024-07-26 01:16:44.283109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.131 qpair failed and we were unable to recover it. 00:34:14.131 [2024-07-26 01:16:44.283242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.131 [2024-07-26 01:16:44.283268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.131 qpair failed and we were unable to recover it. 00:34:14.131 [2024-07-26 01:16:44.283370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.131 [2024-07-26 01:16:44.283395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.131 qpair failed and we were unable to recover it. 
00:34:14.131 [2024-07-26 01:16:44.283531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.131 [2024-07-26 01:16:44.283556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.131 qpair failed and we were unable to recover it. 00:34:14.131 [2024-07-26 01:16:44.283655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.131 [2024-07-26 01:16:44.283680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.131 qpair failed and we were unable to recover it. 00:34:14.131 [2024-07-26 01:16:44.283783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.131 [2024-07-26 01:16:44.283809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.131 qpair failed and we were unable to recover it. 00:34:14.131 [2024-07-26 01:16:44.283939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.131 [2024-07-26 01:16:44.283964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.131 qpair failed and we were unable to recover it. 00:34:14.131 [2024-07-26 01:16:44.284066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.131 [2024-07-26 01:16:44.284092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.131 qpair failed and we were unable to recover it. 
00:34:14.131 [2024-07-26 01:16:44.284201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.131 [2024-07-26 01:16:44.284227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.131 qpair failed and we were unable to recover it. 00:34:14.131 [2024-07-26 01:16:44.284362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.131 [2024-07-26 01:16:44.284387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.131 qpair failed and we were unable to recover it. 00:34:14.131 [2024-07-26 01:16:44.284524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.131 [2024-07-26 01:16:44.284549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.131 qpair failed and we were unable to recover it. 00:34:14.131 [2024-07-26 01:16:44.284690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.131 [2024-07-26 01:16:44.284715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.131 qpair failed and we were unable to recover it. 00:34:14.131 [2024-07-26 01:16:44.284822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.131 [2024-07-26 01:16:44.284851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.131 qpair failed and we were unable to recover it. 
00:34:14.131 [2024-07-26 01:16:44.285014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.131 [2024-07-26 01:16:44.285043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.131 qpair failed and we were unable to recover it. 00:34:14.131 [2024-07-26 01:16:44.285175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.131 [2024-07-26 01:16:44.285201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.131 qpair failed and we were unable to recover it. 00:34:14.131 [2024-07-26 01:16:44.285338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.131 [2024-07-26 01:16:44.285364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.131 qpair failed and we were unable to recover it. 00:34:14.131 [2024-07-26 01:16:44.285499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.131 [2024-07-26 01:16:44.285524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.131 qpair failed and we were unable to recover it. 00:34:14.131 [2024-07-26 01:16:44.285664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.131 [2024-07-26 01:16:44.285690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.131 qpair failed and we were unable to recover it. 
00:34:14.131 [2024-07-26 01:16:44.285799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.131 [2024-07-26 01:16:44.285825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.131 qpair failed and we were unable to recover it. 00:34:14.131 [2024-07-26 01:16:44.285993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.131 [2024-07-26 01:16:44.286020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.131 qpair failed and we were unable to recover it. 00:34:14.131 [2024-07-26 01:16:44.286188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.131 [2024-07-26 01:16:44.286215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.131 qpair failed and we were unable to recover it. 00:34:14.131 [2024-07-26 01:16:44.286354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.131 [2024-07-26 01:16:44.286380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.131 qpair failed and we were unable to recover it. 00:34:14.131 [2024-07-26 01:16:44.286510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.131 [2024-07-26 01:16:44.286536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.131 qpair failed and we were unable to recover it. 
00:34:14.131 [2024-07-26 01:16:44.286699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.131 [2024-07-26 01:16:44.286730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.131 qpair failed and we were unable to recover it. 00:34:14.131 [2024-07-26 01:16:44.286841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.131 [2024-07-26 01:16:44.286867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.131 qpair failed and we were unable to recover it. 00:34:14.131 [2024-07-26 01:16:44.287034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.131 [2024-07-26 01:16:44.287065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.131 qpair failed and we were unable to recover it. 00:34:14.131 [2024-07-26 01:16:44.287178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.131 [2024-07-26 01:16:44.287204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.131 qpair failed and we were unable to recover it. 00:34:14.131 [2024-07-26 01:16:44.287342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.131 [2024-07-26 01:16:44.287369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.131 qpair failed and we were unable to recover it. 
00:34:14.131 [2024-07-26 01:16:44.287528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.131 [2024-07-26 01:16:44.287554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.131 qpair failed and we were unable to recover it.
00:34:14.131 [2024-07-26 01:16:44.287658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.131 [2024-07-26 01:16:44.287684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.131 qpair failed and we were unable to recover it.
00:34:14.131 [2024-07-26 01:16:44.287823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.131 [2024-07-26 01:16:44.287848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.131 qpair failed and we were unable to recover it.
00:34:14.131 [2024-07-26 01:16:44.288015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.131 [2024-07-26 01:16:44.288040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.131 qpair failed and we were unable to recover it.
00:34:14.131 [2024-07-26 01:16:44.288185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.288212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.288372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.288397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.288532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.288557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.288671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.288697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.288830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.288855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.288995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.289020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.289171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.289197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.289326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.289352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.289500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.289525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.289664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.289690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.289829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.289854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.289961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.289987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.290143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.290169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.290291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.290317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.290421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.290448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.290564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.290589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.290732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.290758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.290865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.290890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.291032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.291064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.291200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.291228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.291345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.291372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.291475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.291501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.291640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.291666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.291783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.291810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.291948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.291974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.292109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.292136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.292270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.292296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.292459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.292484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.292642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.292668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.292795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.292821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.292931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.292958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.293083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.293114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.293262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.293288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.293425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.293451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.293584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.293610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.293737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.293763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.293897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.293923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.132 [2024-07-26 01:16:44.294035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.132 [2024-07-26 01:16:44.294067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.132 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.294202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.294227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.294359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.294384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.294494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.294519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.294691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.294717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.294849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.294875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.294982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.295008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.295151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.295177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.295300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.295326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.295453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.295479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.295616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.295642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.295779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.295806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.295921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.295946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.296085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.296111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.296249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.296275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.296404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.296430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.296537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.296564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.296727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.296753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.296883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.296909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.297012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.297039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.297212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.297239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.297385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.297411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.297522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.297547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.297658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.297684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.297787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.297813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.297945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.297971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.298071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.298097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.298230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.298256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.298385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.298411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.298582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.298608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.298766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.298792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.298924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.298950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.299066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.299094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.299211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.299237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.299348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.299377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.299490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.299517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.299654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.299680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.299823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.299849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.133 qpair failed and we were unable to recover it.
00:34:14.133 [2024-07-26 01:16:44.299990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.133 [2024-07-26 01:16:44.300016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.300180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.300206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.300348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.300373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.300510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.300536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.300697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.300723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.300859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.300885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.301020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.301046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.301198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.301236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.301392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.301430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.301551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.301579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.301726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.301752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.301855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.301881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.301985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.302010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.302171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.302198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.302330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.302355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.302493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.302518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.302656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.302681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.302816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.302843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.302955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.302980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.303110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.303137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.303277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.303303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.303428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.303454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.303595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.303620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.303739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.303765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.303904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.303930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.304089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.304116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.304219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.304244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.304369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.304394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.304527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.304552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.304690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.304715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.304849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.304875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.304982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.305010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.305140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.305179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.305324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.305351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.305463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.305489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.305605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.305630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.305767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.305792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.305915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.134 [2024-07-26 01:16:44.305942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.134 qpair failed and we were unable to recover it.
00:34:14.134 [2024-07-26 01:16:44.306121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.306160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.306335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.306362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.306525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.306551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.306687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.306712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.306848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.306874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.307006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.307031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.307171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.307198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.307312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.307338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.307468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.307493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.307614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.307639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.307737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.307762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.307899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.307924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.308078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.308117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.308264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.308292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.308403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.308429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.308536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.308562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.308689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.308715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.308865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.308891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.309052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.309086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.309252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.309278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.309391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.309416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.309578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.309604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.309742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.309768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.309882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.309908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.310010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.310038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.310150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.310181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.310320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.310345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.310478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.310503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.310611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.310636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.310770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.310795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.310923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.310948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.311092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.311119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.311225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.311250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.311402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.311428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.311566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.311592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.311726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.311751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.311896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.311922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.135 [2024-07-26 01:16:44.312035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.135 [2024-07-26 01:16:44.312072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.135 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.312178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.312203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.312344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.312373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.312487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.312513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.312652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.312679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.312845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.312870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.313032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.313065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.313203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.313228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.313362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.313387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.313498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.313524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.313626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.313652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.313788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.313814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.313929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.313955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.314134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.314173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.314321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.314349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.314488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.314518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.314626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.314651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.314812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.314837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.314999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.315024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.315179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.315205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.315312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.315337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.315483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.315509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.315627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.315652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.315784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.315810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.315934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.315960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.316117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.316143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.316243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.316268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.316401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.316427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.316559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.316584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.316723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.316753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.316898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.316924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.317068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.317094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.317227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.317253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.136 [2024-07-26 01:16:44.317389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.136 [2024-07-26 01:16:44.317415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.136 qpair failed and we were unable to recover it.
00:34:14.137 [2024-07-26 01:16:44.317520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.137 [2024-07-26 01:16:44.317545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.137 qpair failed and we were unable to recover it.
00:34:14.137 [2024-07-26 01:16:44.317682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.137 [2024-07-26 01:16:44.317708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.137 qpair failed and we were unable to recover it.
00:34:14.137 [2024-07-26 01:16:44.317846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.317872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-07-26 01:16:44.317977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.318003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-07-26 01:16:44.318159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.318185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-07-26 01:16:44.318293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.318318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-07-26 01:16:44.318464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.318489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 
00:34:14.137 [2024-07-26 01:16:44.318591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.318616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-07-26 01:16:44.318751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.318776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-07-26 01:16:44.318936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.318961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-07-26 01:16:44.319100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.319139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-07-26 01:16:44.319264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.319291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 
00:34:14.137 [2024-07-26 01:16:44.319430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.319456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-07-26 01:16:44.319589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.319614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-07-26 01:16:44.319746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.319772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-07-26 01:16:44.319916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.319955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-07-26 01:16:44.320124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.320152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 
00:34:14.137 [2024-07-26 01:16:44.320320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.320346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-07-26 01:16:44.320453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.320478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-07-26 01:16:44.320610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.320635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-07-26 01:16:44.320779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.320805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-07-26 01:16:44.320964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.320989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 
00:34:14.137 [2024-07-26 01:16:44.321108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.321136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-07-26 01:16:44.321274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.321299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-07-26 01:16:44.321459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.321484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-07-26 01:16:44.321586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.321611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-07-26 01:16:44.321785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.321810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 
00:34:14.137 [2024-07-26 01:16:44.321911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.321936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-07-26 01:16:44.322072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.322099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-07-26 01:16:44.322241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.322266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-07-26 01:16:44.322403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.322429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-07-26 01:16:44.322559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.322583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 
00:34:14.137 [2024-07-26 01:16:44.322693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.322718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-07-26 01:16:44.322840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.322865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-07-26 01:16:44.322996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.323021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-07-26 01:16:44.323128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.323158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.137 [2024-07-26 01:16:44.323260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.323286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 
00:34:14.137 [2024-07-26 01:16:44.323418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.137 [2024-07-26 01:16:44.323443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.137 qpair failed and we were unable to recover it. 00:34:14.138 [2024-07-26 01:16:44.323579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.323604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-07-26 01:16:44.323737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.323763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-07-26 01:16:44.323899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.323924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-07-26 01:16:44.324055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.324086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 
00:34:14.138 [2024-07-26 01:16:44.324212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.324237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-07-26 01:16:44.324389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.324428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-07-26 01:16:44.324572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.324600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-07-26 01:16:44.324741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.324769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-07-26 01:16:44.324915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.324941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 
00:34:14.138 [2024-07-26 01:16:44.325046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.325079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-07-26 01:16:44.325212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.325238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-07-26 01:16:44.325350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.325377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-07-26 01:16:44.325491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.325517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-07-26 01:16:44.325673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.325699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 
00:34:14.138 [2024-07-26 01:16:44.325825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.325850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-07-26 01:16:44.325973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.325999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-07-26 01:16:44.326104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.326130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-07-26 01:16:44.326263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.326288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-07-26 01:16:44.326394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.326419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 
00:34:14.138 [2024-07-26 01:16:44.326552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.326577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-07-26 01:16:44.326716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.326744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-07-26 01:16:44.326852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.326878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-07-26 01:16:44.327011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.327037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-07-26 01:16:44.327179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.327206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 
00:34:14.138 [2024-07-26 01:16:44.327319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.327345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-07-26 01:16:44.327463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.327488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-07-26 01:16:44.327624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.327649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-07-26 01:16:44.327756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.327782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-07-26 01:16:44.327895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.327920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 
00:34:14.138 [2024-07-26 01:16:44.328053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.328089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-07-26 01:16:44.328306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.328332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-07-26 01:16:44.328437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.328462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-07-26 01:16:44.328616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.328641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-07-26 01:16:44.328786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.328811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 
00:34:14.138 [2024-07-26 01:16:44.328951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.328976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-07-26 01:16:44.329114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.329140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-07-26 01:16:44.329252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.329278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.138 qpair failed and we were unable to recover it. 00:34:14.138 [2024-07-26 01:16:44.329448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.138 [2024-07-26 01:16:44.329473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 00:34:14.139 [2024-07-26 01:16:44.329577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.329602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 
00:34:14.139 [2024-07-26 01:16:44.329703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.329729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 00:34:14.139 [2024-07-26 01:16:44.329833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.329858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 00:34:14.139 [2024-07-26 01:16:44.329995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.330020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 00:34:14.139 [2024-07-26 01:16:44.330133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.330158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 00:34:14.139 [2024-07-26 01:16:44.330258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.330283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 
00:34:14.139 [2024-07-26 01:16:44.330386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.330412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 00:34:14.139 [2024-07-26 01:16:44.330554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.330579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 00:34:14.139 [2024-07-26 01:16:44.330718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.330743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 00:34:14.139 [2024-07-26 01:16:44.330874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.330899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 00:34:14.139 [2024-07-26 01:16:44.331035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.331065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 
00:34:14.139 [2024-07-26 01:16:44.331174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.331199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 00:34:14.139 [2024-07-26 01:16:44.331335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.331361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 00:34:14.139 [2024-07-26 01:16:44.331496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.331522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 00:34:14.139 [2024-07-26 01:16:44.331659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.331684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 00:34:14.139 [2024-07-26 01:16:44.331823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.331848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 
00:34:14.139 [2024-07-26 01:16:44.332011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.332037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 00:34:14.139 [2024-07-26 01:16:44.332153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.332178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 00:34:14.139 [2024-07-26 01:16:44.332308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.332333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 00:34:14.139 [2024-07-26 01:16:44.332492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.332518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 00:34:14.139 [2024-07-26 01:16:44.332629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.332654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 
00:34:14.139 [2024-07-26 01:16:44.332786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.332811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 00:34:14.139 [2024-07-26 01:16:44.332918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.332943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 00:34:14.139 [2024-07-26 01:16:44.333072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.333110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 00:34:14.139 [2024-07-26 01:16:44.333221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.333247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 00:34:14.139 [2024-07-26 01:16:44.333360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.333387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 
00:34:14.139 [2024-07-26 01:16:44.333518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.333544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 00:34:14.139 [2024-07-26 01:16:44.333681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.333713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 00:34:14.139 [2024-07-26 01:16:44.333874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.333901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 00:34:14.139 [2024-07-26 01:16:44.334035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.334066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 00:34:14.139 [2024-07-26 01:16:44.334249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.334288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 
00:34:14.139 [2024-07-26 01:16:44.334459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.334487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 00:34:14.139 [2024-07-26 01:16:44.334606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.334632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 00:34:14.139 [2024-07-26 01:16:44.334749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.334775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 00:34:14.139 [2024-07-26 01:16:44.334927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.334953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 00:34:14.139 [2024-07-26 01:16:44.335110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.335148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 
00:34:14.139 [2024-07-26 01:16:44.335291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.139 [2024-07-26 01:16:44.335318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.139 qpair failed and we were unable to recover it. 00:34:14.139 [2024-07-26 01:16:44.335455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.335481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 00:34:14.140 [2024-07-26 01:16:44.335618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.335645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 00:34:14.140 [2024-07-26 01:16:44.335757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.335782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 00:34:14.140 [2024-07-26 01:16:44.335920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.335946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 
00:34:14.140 [2024-07-26 01:16:44.336085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.336113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 00:34:14.140 [2024-07-26 01:16:44.336276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.336301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 00:34:14.140 [2024-07-26 01:16:44.336436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.336461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 00:34:14.140 [2024-07-26 01:16:44.336572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.336598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 00:34:14.140 [2024-07-26 01:16:44.336729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.336756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 
00:34:14.140 [2024-07-26 01:16:44.336918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.336943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 00:34:14.140 [2024-07-26 01:16:44.337046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.337080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 00:34:14.140 [2024-07-26 01:16:44.337185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.337211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 00:34:14.140 [2024-07-26 01:16:44.337374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.337400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 00:34:14.140 [2024-07-26 01:16:44.337505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.337531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 
00:34:14.140 [2024-07-26 01:16:44.337669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.337695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 00:34:14.140 [2024-07-26 01:16:44.337814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.337840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 00:34:14.140 [2024-07-26 01:16:44.337987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.338026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 00:34:14.140 [2024-07-26 01:16:44.338193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.338231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 00:34:14.140 [2024-07-26 01:16:44.338354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.338381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 
00:34:14.140 [2024-07-26 01:16:44.338517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.338543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 00:34:14.140 [2024-07-26 01:16:44.338680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.338705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 00:34:14.140 [2024-07-26 01:16:44.338853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.338878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 00:34:14.140 [2024-07-26 01:16:44.339018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.339043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 00:34:14.140 [2024-07-26 01:16:44.339159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.339188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 
00:34:14.140 [2024-07-26 01:16:44.339330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.339357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 00:34:14.140 [2024-07-26 01:16:44.339492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.339519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 00:34:14.140 [2024-07-26 01:16:44.339676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.339702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 00:34:14.140 [2024-07-26 01:16:44.339838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.339864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 00:34:14.140 [2024-07-26 01:16:44.339967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.339993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 
00:34:14.140 [2024-07-26 01:16:44.340122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.340149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 00:34:14.140 [2024-07-26 01:16:44.340317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.340348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 00:34:14.140 [2024-07-26 01:16:44.340487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.340513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 00:34:14.140 [2024-07-26 01:16:44.340663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.340688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.140 qpair failed and we were unable to recover it. 00:34:14.140 [2024-07-26 01:16:44.340803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.140 [2024-07-26 01:16:44.340829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 
00:34:14.141 [2024-07-26 01:16:44.340963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.340988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 00:34:14.141 [2024-07-26 01:16:44.341101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.341127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 00:34:14.141 [2024-07-26 01:16:44.341295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.341321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 00:34:14.141 [2024-07-26 01:16:44.341477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.341503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 00:34:14.141 [2024-07-26 01:16:44.341606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.341632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 
00:34:14.141 [2024-07-26 01:16:44.341792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.341819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 00:34:14.141 [2024-07-26 01:16:44.341929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.341955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 00:34:14.141 [2024-07-26 01:16:44.342087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.342114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 00:34:14.141 [2024-07-26 01:16:44.342222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.342248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 00:34:14.141 [2024-07-26 01:16:44.342385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.342412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 
00:34:14.141 [2024-07-26 01:16:44.342550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.342576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 00:34:14.141 [2024-07-26 01:16:44.342687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.342715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 00:34:14.141 [2024-07-26 01:16:44.342875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.342901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 00:34:14.141 [2024-07-26 01:16:44.343068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.343096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 00:34:14.141 [2024-07-26 01:16:44.343251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.343277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 
00:34:14.141 [2024-07-26 01:16:44.343413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.343438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 00:34:14.141 [2024-07-26 01:16:44.343599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.343623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 00:34:14.141 [2024-07-26 01:16:44.343757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.343782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 00:34:14.141 [2024-07-26 01:16:44.343943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.343968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 00:34:14.141 [2024-07-26 01:16:44.344079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.344107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 
00:34:14.141 [2024-07-26 01:16:44.344222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.344249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 00:34:14.141 [2024-07-26 01:16:44.344377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.344402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 00:34:14.141 [2024-07-26 01:16:44.344529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.344555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 00:34:14.141 [2024-07-26 01:16:44.344695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.344725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 00:34:14.141 [2024-07-26 01:16:44.344827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.344853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 
00:34:14.141 [2024-07-26 01:16:44.344958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.344984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 00:34:14.141 [2024-07-26 01:16:44.345102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.345128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 00:34:14.141 [2024-07-26 01:16:44.345234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.345259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 00:34:14.141 [2024-07-26 01:16:44.345368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.345393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 00:34:14.141 [2024-07-26 01:16:44.345503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.345528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 
00:34:14.141 [2024-07-26 01:16:44.345659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.345684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 00:34:14.141 [2024-07-26 01:16:44.345791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.345819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 00:34:14.141 [2024-07-26 01:16:44.345927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.345953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 00:34:14.141 [2024-07-26 01:16:44.346084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.346110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 00:34:14.141 [2024-07-26 01:16:44.346244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.346270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 
00:34:14.141 [2024-07-26 01:16:44.346411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.346437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 00:34:14.141 [2024-07-26 01:16:44.346569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.141 [2024-07-26 01:16:44.346595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.141 qpair failed and we were unable to recover it. 00:34:14.142 [2024-07-26 01:16:44.346768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.142 [2024-07-26 01:16:44.346795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.142 qpair failed and we were unable to recover it. 00:34:14.142 [2024-07-26 01:16:44.346934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.142 [2024-07-26 01:16:44.346959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.142 qpair failed and we were unable to recover it. 00:34:14.142 [2024-07-26 01:16:44.347071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.142 [2024-07-26 01:16:44.347097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.142 qpair failed and we were unable to recover it. 
00:34:14.144 [2024-07-26 01:16:44.361503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.144 [2024-07-26 01:16:44.361540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.144 qpair failed and we were unable to recover it.
00:34:14.144 [2024-07-26 01:16:44.362161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.144 [2024-07-26 01:16:44.362190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.144 qpair failed and we were unable to recover it. 00:34:14.144 [2024-07-26 01:16:44.362329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.144 [2024-07-26 01:16:44.362356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.144 qpair failed and we were unable to recover it. 00:34:14.144 [2024-07-26 01:16:44.362476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.144 [2024-07-26 01:16:44.362503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.144 qpair failed and we were unable to recover it. 00:34:14.144 [2024-07-26 01:16:44.362614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.144 [2024-07-26 01:16:44.362642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.144 qpair failed and we were unable to recover it. 00:34:14.144 [2024-07-26 01:16:44.362781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.144 [2024-07-26 01:16:44.362808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.144 qpair failed and we were unable to recover it. 
00:34:14.144 [2024-07-26 01:16:44.362963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.144 [2024-07-26 01:16:44.363003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.144 qpair failed and we were unable to recover it.
00:34:14.144 [2024-07-26 01:16:44.363146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.144 [2024-07-26 01:16:44.363174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.144 qpair failed and we were unable to recover it.
00:34:14.144 [2024-07-26 01:16:44.363299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.144 [2024-07-26 01:16:44.363326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.144 qpair failed and we were unable to recover it.
00:34:14.144 [2024-07-26 01:16:44.363485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.144 [2024-07-26 01:16:44.363511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.144 qpair failed and we were unable to recover it.
00:34:14.144 [2024-07-26 01:16:44.363643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.144 [2024-07-26 01:16:44.363668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.144 qpair failed and we were unable to recover it.
00:34:14.144 [2024-07-26 01:16:44.363823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.144 [2024-07-26 01:16:44.363864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.144 qpair failed and we were unable to recover it.
00:34:14.144 [2024-07-26 01:16:44.363994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.144 [2024-07-26 01:16:44.364023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.144 qpair failed and we were unable to recover it.
00:34:14.144 [2024-07-26 01:16:44.364148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.144 [2024-07-26 01:16:44.364176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.144 qpair failed and we were unable to recover it.
00:34:14.144 [2024-07-26 01:16:44.364314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.144 [2024-07-26 01:16:44.364341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.144 qpair failed and we were unable to recover it.
00:34:14.144 [2024-07-26 01:16:44.364448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.144 [2024-07-26 01:16:44.364475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.144 qpair failed and we were unable to recover it.
00:34:14.144 [2024-07-26 01:16:44.364636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.364662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.364794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.364821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.364953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.364979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.365110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.365137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.365274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.365302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.365436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.365463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.365594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.365620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.365750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.365777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.365923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.365950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.366089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.366116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.366262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.366290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.366407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.366433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.366540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.366565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.366701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.366727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.366835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.366860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.366990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.367016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.367161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.367189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.367298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.367325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.367468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.367495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.367645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.367672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.367835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.367862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.367969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.368000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.368121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.368148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.368286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.368312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.368469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.368495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.368600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.368626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.368757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.368783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.368891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.368917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.369048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.369080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.369183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.369210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.369323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.369349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.369451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.369476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.369635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.145 [2024-07-26 01:16:44.369661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.145 qpair failed and we were unable to recover it.
00:34:14.145 [2024-07-26 01:16:44.369799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.369826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.369933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.369959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.370112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.370154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.370279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.370308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.370480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.370508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.370642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.370669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.370807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.370835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.370974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.371003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.371131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.371158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.371321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.371348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.371479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.371506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.371639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.371666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.371808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.371835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.371978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.372007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.372180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.372208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.372343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.372373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.372503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.372530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.372647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.372673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.372773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.372799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.372906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.372932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.373069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.373096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.373236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.373262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.373413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.373439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.373541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.373568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.373703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.373729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.373872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.373899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.374005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.374032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.374168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.374194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.374330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.374356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.374519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.374546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.374705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.374731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.374873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.374900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.375027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.375053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.375173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.375199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.375309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.375335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.375445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.375471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.375571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.375598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.375762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.146 [2024-07-26 01:16:44.375788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.146 qpair failed and we were unable to recover it.
00:34:14.146 [2024-07-26 01:16:44.375890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.375916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.376043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.376074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.376188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.376214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.376318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.376345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.376455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.376481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.376623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.376650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.376789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.376815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.376949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.376975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.377119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.377160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.377330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.377359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.377465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.377493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.377652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.377679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.377808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.377835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.377969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.377996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.378135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.378164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.378276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.378303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.378439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.378466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.378572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.378598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.378704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.378731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.378835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.378861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.379000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.379028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.379166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.379207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.379376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.379405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.379510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.379538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.379666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.379694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.379857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.379884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.380000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.380028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.380209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.380236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.380372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.380398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.380540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.380567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.380676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.380703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.380866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.380896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.381008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.381035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.381178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.381205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.381306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.381332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.381432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.381459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.381571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.381597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.147 [2024-07-26 01:16:44.381701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.147 [2024-07-26 01:16:44.381728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.147 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.381836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.381863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.381971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.381997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.382128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.382155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.382260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.382287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.382423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.382449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.382564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.382591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.382752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.382778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.382956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.383000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.383174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.383202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.383338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.383365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.383504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.383530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.383636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.383663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.383798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.383825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.383957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.383984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.384124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.384150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.384287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.384314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.384429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.384455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.384571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.384598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.384710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.384736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.384862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.384889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.384994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.385020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.385178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.385208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.385357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.385398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.385516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.385544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.385667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.385694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.385819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.385846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.385954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.385981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.386093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.386121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.386261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.386288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.386448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.386475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.386620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.386646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.386814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.386843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.386978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.387006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.387119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.387147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.387312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.387339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.387454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.387481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.387643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.148 [2024-07-26 01:16:44.387670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.148 qpair failed and we were unable to recover it.
00:34:14.148 [2024-07-26 01:16:44.387800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.387827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.387934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.387960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.388121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.388148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.388258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.388285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.388423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.388449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.388553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.388580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.388686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.388713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.388820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.388846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.388972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.388998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.389157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.389184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.389312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.389338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.389480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.389507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.389614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.389640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.389768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.389794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.389898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.389925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.390033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.390066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.390215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.390241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.390349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.390375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.390477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.390504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.390641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.390668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.390799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.390826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.390953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.390979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.391086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.391113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.391248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.391274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.391374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.391407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.391533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.391559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.391684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.391711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.391870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.391896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.392004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.392030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.392148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.392175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.392303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.392329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.392488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.392514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.392621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.392647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.392791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.149 [2024-07-26 01:16:44.392818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.149 qpair failed and we were unable to recover it.
00:34:14.149 [2024-07-26 01:16:44.392948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.149 [2024-07-26 01:16:44.392974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 00:34:14.150 [2024-07-26 01:16:44.393127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.393168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 00:34:14.150 [2024-07-26 01:16:44.393293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.393321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 00:34:14.150 [2024-07-26 01:16:44.393431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.393458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 00:34:14.150 [2024-07-26 01:16:44.393627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.393654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 
00:34:14.150 [2024-07-26 01:16:44.393794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.393821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 00:34:14.150 [2024-07-26 01:16:44.393955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.393982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 00:34:14.150 [2024-07-26 01:16:44.394118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.394146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 00:34:14.150 [2024-07-26 01:16:44.394283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.394310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 00:34:14.150 [2024-07-26 01:16:44.394412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.394439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 
00:34:14.150 [2024-07-26 01:16:44.394584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.394612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 00:34:14.150 [2024-07-26 01:16:44.394748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.394774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 00:34:14.150 [2024-07-26 01:16:44.394887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.394913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 00:34:14.150 [2024-07-26 01:16:44.395045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.395080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 00:34:14.150 [2024-07-26 01:16:44.395198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.395225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 
00:34:14.150 [2024-07-26 01:16:44.395361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.395388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 00:34:14.150 [2024-07-26 01:16:44.395546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.395572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 00:34:14.150 [2024-07-26 01:16:44.395711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.395742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 00:34:14.150 [2024-07-26 01:16:44.395876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.395903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 00:34:14.150 [2024-07-26 01:16:44.396016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.396045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 
00:34:14.150 [2024-07-26 01:16:44.396170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.396199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 00:34:14.150 [2024-07-26 01:16:44.396344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.396373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 00:34:14.150 [2024-07-26 01:16:44.396488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.396515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 00:34:14.150 [2024-07-26 01:16:44.396688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.396715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 00:34:14.150 [2024-07-26 01:16:44.396855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.396882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 
00:34:14.150 [2024-07-26 01:16:44.397036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.397069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 00:34:14.150 [2024-07-26 01:16:44.397225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.397265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 00:34:14.150 [2024-07-26 01:16:44.397408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.397436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 00:34:14.150 [2024-07-26 01:16:44.397607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.397634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 00:34:14.150 [2024-07-26 01:16:44.397749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.397777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 
00:34:14.150 [2024-07-26 01:16:44.397914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.397941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 00:34:14.150 [2024-07-26 01:16:44.398086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.398114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 00:34:14.150 [2024-07-26 01:16:44.398260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.398287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 00:34:14.150 [2024-07-26 01:16:44.398424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.398451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 00:34:14.150 [2024-07-26 01:16:44.398597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.398625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 
00:34:14.150 [2024-07-26 01:16:44.398739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.398765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 00:34:14.150 [2024-07-26 01:16:44.398870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.398899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.150 qpair failed and we were unable to recover it. 00:34:14.150 [2024-07-26 01:16:44.399015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.150 [2024-07-26 01:16:44.399045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 00:34:14.151 [2024-07-26 01:16:44.399212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.399253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 00:34:14.151 [2024-07-26 01:16:44.399393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.399421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 
00:34:14.151 [2024-07-26 01:16:44.399530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.399557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 00:34:14.151 [2024-07-26 01:16:44.399691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.399717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 00:34:14.151 [2024-07-26 01:16:44.399828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.399855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 00:34:14.151 [2024-07-26 01:16:44.399988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.400014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 00:34:14.151 [2024-07-26 01:16:44.400163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.400193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 
00:34:14.151 [2024-07-26 01:16:44.400329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.400356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 00:34:14.151 [2024-07-26 01:16:44.400462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.400490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 00:34:14.151 [2024-07-26 01:16:44.400650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.400677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 00:34:14.151 [2024-07-26 01:16:44.400813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.400841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 00:34:14.151 [2024-07-26 01:16:44.400983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.401011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 
00:34:14.151 [2024-07-26 01:16:44.401162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.401189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 00:34:14.151 [2024-07-26 01:16:44.401329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.401355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 00:34:14.151 [2024-07-26 01:16:44.401461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.401487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 00:34:14.151 [2024-07-26 01:16:44.401596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.401622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 00:34:14.151 [2024-07-26 01:16:44.401734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.401761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 
00:34:14.151 [2024-07-26 01:16:44.401922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.401948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 00:34:14.151 [2024-07-26 01:16:44.402072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.402103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 00:34:14.151 [2024-07-26 01:16:44.402251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.402279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 00:34:14.151 [2024-07-26 01:16:44.402429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.402470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 00:34:14.151 [2024-07-26 01:16:44.402641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.402670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 
00:34:14.151 [2024-07-26 01:16:44.402803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.402830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 00:34:14.151 [2024-07-26 01:16:44.402932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.402960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 00:34:14.151 [2024-07-26 01:16:44.403102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.403130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 00:34:14.151 [2024-07-26 01:16:44.403290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.403317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 00:34:14.151 [2024-07-26 01:16:44.403469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.403495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 
00:34:14.151 [2024-07-26 01:16:44.403596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.403622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 00:34:14.151 [2024-07-26 01:16:44.403725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.403752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 00:34:14.151 [2024-07-26 01:16:44.403884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.403911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 00:34:14.151 [2024-07-26 01:16:44.404050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.404083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 00:34:14.151 [2024-07-26 01:16:44.404213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.404240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 
00:34:14.151 [2024-07-26 01:16:44.404350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.404376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 00:34:14.151 [2024-07-26 01:16:44.404513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.404541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 00:34:14.151 [2024-07-26 01:16:44.404653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.404680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 00:34:14.151 [2024-07-26 01:16:44.404786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.404813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 00:34:14.151 [2024-07-26 01:16:44.404942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.151 [2024-07-26 01:16:44.404968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.151 qpair failed and we were unable to recover it. 
00:34:14.151 [2024-07-26 01:16:44.405081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.152 [2024-07-26 01:16:44.405110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.152 qpair failed and we were unable to recover it. 00:34:14.152 [2024-07-26 01:16:44.405252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.152 [2024-07-26 01:16:44.405279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.152 qpair failed and we were unable to recover it. 00:34:14.152 [2024-07-26 01:16:44.405416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.152 [2024-07-26 01:16:44.405443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.152 qpair failed and we were unable to recover it. 00:34:14.152 [2024-07-26 01:16:44.405577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.152 [2024-07-26 01:16:44.405605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.152 qpair failed and we were unable to recover it. 00:34:14.152 [2024-07-26 01:16:44.405771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.152 [2024-07-26 01:16:44.405797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.152 qpair failed and we were unable to recover it. 
00:34:14.152 [2024-07-26 01:16:44.405932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.152 [2024-07-26 01:16:44.405959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.152 qpair failed and we were unable to recover it. 00:34:14.152 [2024-07-26 01:16:44.406073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.152 [2024-07-26 01:16:44.406100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.152 qpair failed and we were unable to recover it. 00:34:14.152 [2024-07-26 01:16:44.406213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.152 [2024-07-26 01:16:44.406239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.152 qpair failed and we were unable to recover it. 00:34:14.152 [2024-07-26 01:16:44.406337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.152 [2024-07-26 01:16:44.406364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.152 qpair failed and we were unable to recover it. 00:34:14.152 [2024-07-26 01:16:44.406498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.152 [2024-07-26 01:16:44.406524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.152 qpair failed and we were unable to recover it. 
00:34:14.152 [2024-07-26 01:16:44.406659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.152 [2024-07-26 01:16:44.406686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.152 qpair failed and we were unable to recover it. 00:34:14.152 [2024-07-26 01:16:44.406821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.152 [2024-07-26 01:16:44.406848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.152 qpair failed and we were unable to recover it. 00:34:14.152 [2024-07-26 01:16:44.406994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.152 [2024-07-26 01:16:44.407021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.152 qpair failed and we were unable to recover it. 00:34:14.152 [2024-07-26 01:16:44.407161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.152 [2024-07-26 01:16:44.407188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.152 qpair failed and we were unable to recover it. 00:34:14.152 [2024-07-26 01:16:44.407318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.152 [2024-07-26 01:16:44.407345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.152 qpair failed and we were unable to recover it. 
00:34:14.155 [2024-07-26 01:16:44.425165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.155 [2024-07-26 01:16:44.425194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.155 qpair failed and we were unable to recover it. 00:34:14.155 [2024-07-26 01:16:44.425358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.155 [2024-07-26 01:16:44.425386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.155 qpair failed and we were unable to recover it. 00:34:14.155 [2024-07-26 01:16:44.425524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.155 [2024-07-26 01:16:44.425551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.155 qpair failed and we were unable to recover it. 00:34:14.155 [2024-07-26 01:16:44.425693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.155 [2024-07-26 01:16:44.425721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.155 qpair failed and we were unable to recover it. 00:34:14.155 [2024-07-26 01:16:44.425883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.155 [2024-07-26 01:16:44.425910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.155 qpair failed and we were unable to recover it. 
00:34:14.155 [2024-07-26 01:16:44.426071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.155 [2024-07-26 01:16:44.426099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.155 qpair failed and we were unable to recover it. 00:34:14.155 [2024-07-26 01:16:44.426235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.155 [2024-07-26 01:16:44.426263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.155 qpair failed and we were unable to recover it. 00:34:14.155 [2024-07-26 01:16:44.426400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.155 [2024-07-26 01:16:44.426427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.155 qpair failed and we were unable to recover it. 00:34:14.155 [2024-07-26 01:16:44.426542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.155 [2024-07-26 01:16:44.426569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.155 qpair failed and we were unable to recover it. 00:34:14.155 [2024-07-26 01:16:44.426685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.155 [2024-07-26 01:16:44.426715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.155 qpair failed and we were unable to recover it. 
00:34:14.155 [2024-07-26 01:16:44.426854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.155 [2024-07-26 01:16:44.426881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.155 qpair failed and we were unable to recover it. 00:34:14.155 [2024-07-26 01:16:44.427038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.155 [2024-07-26 01:16:44.427072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.155 qpair failed and we were unable to recover it. 00:34:14.155 [2024-07-26 01:16:44.427185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.155 [2024-07-26 01:16:44.427211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.155 qpair failed and we were unable to recover it. 00:34:14.155 [2024-07-26 01:16:44.427373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.155 [2024-07-26 01:16:44.427401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.155 qpair failed and we were unable to recover it. 00:34:14.155 [2024-07-26 01:16:44.427542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.155 [2024-07-26 01:16:44.427569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.155 qpair failed and we were unable to recover it. 
00:34:14.155 [2024-07-26 01:16:44.427702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.155 [2024-07-26 01:16:44.427729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.155 qpair failed and we were unable to recover it. 00:34:14.155 [2024-07-26 01:16:44.427842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.155 [2024-07-26 01:16:44.427870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.155 qpair failed and we were unable to recover it. 00:34:14.155 [2024-07-26 01:16:44.428032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.155 [2024-07-26 01:16:44.428066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.155 qpair failed and we were unable to recover it. 00:34:14.155 [2024-07-26 01:16:44.428182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.155 [2024-07-26 01:16:44.428212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.155 qpair failed and we were unable to recover it. 00:34:14.155 [2024-07-26 01:16:44.428353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.155 [2024-07-26 01:16:44.428381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.155 qpair failed and we were unable to recover it. 
00:34:14.155 [2024-07-26 01:16:44.428518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.155 [2024-07-26 01:16:44.428546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.155 qpair failed and we were unable to recover it. 00:34:14.156 [2024-07-26 01:16:44.428677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.428704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 00:34:14.156 [2024-07-26 01:16:44.428806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.428833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 00:34:14.156 [2024-07-26 01:16:44.428938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.428966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 00:34:14.156 [2024-07-26 01:16:44.429112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.429152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 
00:34:14.156 [2024-07-26 01:16:44.429287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.429314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 00:34:14.156 [2024-07-26 01:16:44.429475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.429502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 00:34:14.156 [2024-07-26 01:16:44.429663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.429690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 00:34:14.156 [2024-07-26 01:16:44.429848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.429875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 00:34:14.156 [2024-07-26 01:16:44.429975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.430001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 
00:34:14.156 [2024-07-26 01:16:44.430175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.430205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 00:34:14.156 [2024-07-26 01:16:44.430348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.430375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 00:34:14.156 [2024-07-26 01:16:44.430507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.430539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 00:34:14.156 [2024-07-26 01:16:44.430658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.430695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 00:34:14.156 [2024-07-26 01:16:44.430854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.430881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 
00:34:14.156 [2024-07-26 01:16:44.430993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.431021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 00:34:14.156 [2024-07-26 01:16:44.431165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.431192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 00:34:14.156 [2024-07-26 01:16:44.431332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.431362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 00:34:14.156 [2024-07-26 01:16:44.431526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.431553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 00:34:14.156 [2024-07-26 01:16:44.431669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.431695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 
00:34:14.156 [2024-07-26 01:16:44.431800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.431827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 00:34:14.156 [2024-07-26 01:16:44.431954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.431981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 00:34:14.156 [2024-07-26 01:16:44.432094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.432122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 00:34:14.156 [2024-07-26 01:16:44.432237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.432265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 00:34:14.156 [2024-07-26 01:16:44.432426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.432454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 
00:34:14.156 [2024-07-26 01:16:44.432595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.432628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 00:34:14.156 [2024-07-26 01:16:44.432736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.432762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 00:34:14.156 [2024-07-26 01:16:44.432877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.432904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 00:34:14.156 [2024-07-26 01:16:44.433017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.433057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 00:34:14.156 [2024-07-26 01:16:44.433168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.433195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 
00:34:14.156 [2024-07-26 01:16:44.433331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.433359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 00:34:14.156 [2024-07-26 01:16:44.433508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.433535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 00:34:14.156 [2024-07-26 01:16:44.433650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.433677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 00:34:14.156 [2024-07-26 01:16:44.433812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.433853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 00:34:14.156 [2024-07-26 01:16:44.433994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.434023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 
00:34:14.156 [2024-07-26 01:16:44.434180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.434208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 00:34:14.156 [2024-07-26 01:16:44.434317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.434344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 00:34:14.156 [2024-07-26 01:16:44.434466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.434492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.156 qpair failed and we were unable to recover it. 00:34:14.156 [2024-07-26 01:16:44.434652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.156 [2024-07-26 01:16:44.434679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.157 qpair failed and we were unable to recover it. 00:34:14.157 [2024-07-26 01:16:44.434800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.157 [2024-07-26 01:16:44.434828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.157 qpair failed and we were unable to recover it. 
00:34:14.157 [2024-07-26 01:16:44.434960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.157 [2024-07-26 01:16:44.434987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.157 qpair failed and we were unable to recover it. 00:34:14.157 [2024-07-26 01:16:44.435098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.157 [2024-07-26 01:16:44.435125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.157 qpair failed and we were unable to recover it. 00:34:14.157 [2024-07-26 01:16:44.435261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.157 [2024-07-26 01:16:44.435288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.157 qpair failed and we were unable to recover it. 00:34:14.157 [2024-07-26 01:16:44.435399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.157 [2024-07-26 01:16:44.435426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.157 qpair failed and we were unable to recover it. 00:34:14.157 [2024-07-26 01:16:44.435560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.157 [2024-07-26 01:16:44.435588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.157 qpair failed and we were unable to recover it. 
00:34:14.157 [2024-07-26 01:16:44.435697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.157 [2024-07-26 01:16:44.435724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.157 qpair failed and we were unable to recover it. 00:34:14.157 [2024-07-26 01:16:44.435833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.157 [2024-07-26 01:16:44.435861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.157 qpair failed and we were unable to recover it. 00:34:14.157 [2024-07-26 01:16:44.435993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.157 [2024-07-26 01:16:44.436020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.157 qpair failed and we were unable to recover it. 00:34:14.157 [2024-07-26 01:16:44.436144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.157 [2024-07-26 01:16:44.436173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.157 qpair failed and we were unable to recover it. 00:34:14.157 [2024-07-26 01:16:44.436316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.157 [2024-07-26 01:16:44.436348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.157 qpair failed and we were unable to recover it. 
00:34:14.157 [2024-07-26 01:16:44.436453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.157 [2024-07-26 01:16:44.436481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.157 qpair failed and we were unable to recover it. 00:34:14.157 [2024-07-26 01:16:44.436636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.157 [2024-07-26 01:16:44.436663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.157 qpair failed and we were unable to recover it. 00:34:14.157 [2024-07-26 01:16:44.436799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.157 [2024-07-26 01:16:44.436830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.157 qpair failed and we were unable to recover it. 00:34:14.157 [2024-07-26 01:16:44.436962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.157 [2024-07-26 01:16:44.436989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.157 qpair failed and we were unable to recover it. 00:34:14.157 [2024-07-26 01:16:44.437134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.157 [2024-07-26 01:16:44.437161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.157 qpair failed and we were unable to recover it. 
00:34:14.157 [2024-07-26 01:16:44.437308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.157 [2024-07-26 01:16:44.437340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.157 qpair failed and we were unable to recover it. 00:34:14.157 [2024-07-26 01:16:44.437449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.157 [2024-07-26 01:16:44.437477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.157 qpair failed and we were unable to recover it. 00:34:14.157 [2024-07-26 01:16:44.437616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.157 [2024-07-26 01:16:44.437644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.157 qpair failed and we were unable to recover it. 00:34:14.157 [2024-07-26 01:16:44.437765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.157 [2024-07-26 01:16:44.437805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.157 qpair failed and we were unable to recover it. 00:34:14.157 [2024-07-26 01:16:44.437917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.157 [2024-07-26 01:16:44.437945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.157 qpair failed and we were unable to recover it. 
00:34:14.160 [2024-07-26 01:16:44.456101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.160 [2024-07-26 01:16:44.456127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.160 qpair failed and we were unable to recover it. 00:34:14.160 [2024-07-26 01:16:44.456239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.160 [2024-07-26 01:16:44.456265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.160 qpair failed and we were unable to recover it. 00:34:14.160 [2024-07-26 01:16:44.456364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.160 [2024-07-26 01:16:44.456390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.160 qpair failed and we were unable to recover it. 00:34:14.160 [2024-07-26 01:16:44.456506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.160 [2024-07-26 01:16:44.456532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.160 qpair failed and we were unable to recover it. 00:34:14.160 [2024-07-26 01:16:44.456669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.160 [2024-07-26 01:16:44.456696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.160 qpair failed and we were unable to recover it. 
00:34:14.160 [2024-07-26 01:16:44.456833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.160 [2024-07-26 01:16:44.456865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.160 qpair failed and we were unable to recover it. 00:34:14.160 [2024-07-26 01:16:44.457004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.160 [2024-07-26 01:16:44.457030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.160 qpair failed and we were unable to recover it. 00:34:14.160 [2024-07-26 01:16:44.457165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.160 [2024-07-26 01:16:44.457192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.160 qpair failed and we were unable to recover it. 00:34:14.160 [2024-07-26 01:16:44.457333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.160 [2024-07-26 01:16:44.457359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.160 qpair failed and we were unable to recover it. 00:34:14.160 [2024-07-26 01:16:44.457492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.160 [2024-07-26 01:16:44.457518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.160 qpair failed and we were unable to recover it. 
00:34:14.160 [2024-07-26 01:16:44.457653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.160 [2024-07-26 01:16:44.457679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.160 qpair failed and we were unable to recover it. 00:34:14.160 [2024-07-26 01:16:44.457842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.160 [2024-07-26 01:16:44.457868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.160 qpair failed and we were unable to recover it. 00:34:14.160 [2024-07-26 01:16:44.457967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.160 [2024-07-26 01:16:44.457993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.160 qpair failed and we were unable to recover it. 00:34:14.160 [2024-07-26 01:16:44.458095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.160 [2024-07-26 01:16:44.458123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.160 qpair failed and we were unable to recover it. 00:34:14.160 [2024-07-26 01:16:44.458256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.160 [2024-07-26 01:16:44.458282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 
00:34:14.161 [2024-07-26 01:16:44.458417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.458443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 00:34:14.161 [2024-07-26 01:16:44.458573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.458599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 00:34:14.161 [2024-07-26 01:16:44.458736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.458766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 00:34:14.161 [2024-07-26 01:16:44.458924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.458965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 00:34:14.161 [2024-07-26 01:16:44.459156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.459186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 
00:34:14.161 [2024-07-26 01:16:44.459324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.459352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 00:34:14.161 [2024-07-26 01:16:44.459527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.459555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 00:34:14.161 [2024-07-26 01:16:44.459699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.459726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 00:34:14.161 [2024-07-26 01:16:44.459837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.459865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 00:34:14.161 [2024-07-26 01:16:44.460012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.460038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 
00:34:14.161 [2024-07-26 01:16:44.460172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.460212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 00:34:14.161 [2024-07-26 01:16:44.460327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.460355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 00:34:14.161 [2024-07-26 01:16:44.460464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.460496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 00:34:14.161 [2024-07-26 01:16:44.460658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.460685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 00:34:14.161 [2024-07-26 01:16:44.460843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.460870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 
00:34:14.161 [2024-07-26 01:16:44.461008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.461037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 00:34:14.161 [2024-07-26 01:16:44.461218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.461246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 00:34:14.161 [2024-07-26 01:16:44.461367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.461405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 00:34:14.161 [2024-07-26 01:16:44.461507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.461532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 00:34:14.161 [2024-07-26 01:16:44.461689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.461716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 
00:34:14.161 [2024-07-26 01:16:44.461878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.461904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 00:34:14.161 [2024-07-26 01:16:44.462024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.462050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 00:34:14.161 [2024-07-26 01:16:44.462163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.462189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 00:34:14.161 [2024-07-26 01:16:44.462298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.462327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 00:34:14.161 [2024-07-26 01:16:44.462467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.462493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 
00:34:14.161 [2024-07-26 01:16:44.462601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.462628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 00:34:14.161 [2024-07-26 01:16:44.462747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.462774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 00:34:14.161 [2024-07-26 01:16:44.462943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.462970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 00:34:14.161 [2024-07-26 01:16:44.463121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.463162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 00:34:14.161 [2024-07-26 01:16:44.463332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.463369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 
00:34:14.161 [2024-07-26 01:16:44.463478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.463507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 00:34:14.161 [2024-07-26 01:16:44.463681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.463709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 00:34:14.161 [2024-07-26 01:16:44.463829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.463866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 00:34:14.161 [2024-07-26 01:16:44.463980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.464007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 00:34:14.161 [2024-07-26 01:16:44.464180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.464207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 
00:34:14.161 [2024-07-26 01:16:44.464307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.464340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 00:34:14.161 [2024-07-26 01:16:44.464460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.161 [2024-07-26 01:16:44.464493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.161 qpair failed and we were unable to recover it. 00:34:14.161 [2024-07-26 01:16:44.464651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-26 01:16:44.464677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.162 qpair failed and we were unable to recover it. 00:34:14.162 [2024-07-26 01:16:44.464811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-26 01:16:44.464837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.162 qpair failed and we were unable to recover it. 00:34:14.162 [2024-07-26 01:16:44.464938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-26 01:16:44.464964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.162 qpair failed and we were unable to recover it. 
00:34:14.162 [2024-07-26 01:16:44.465109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-26 01:16:44.465138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.162 qpair failed and we were unable to recover it. 00:34:14.162 [2024-07-26 01:16:44.465282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-26 01:16:44.465311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.162 qpair failed and we were unable to recover it. 00:34:14.162 [2024-07-26 01:16:44.465459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-26 01:16:44.465486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.162 qpair failed and we were unable to recover it. 00:34:14.162 [2024-07-26 01:16:44.465625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-26 01:16:44.465653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.162 qpair failed and we were unable to recover it. 00:34:14.162 [2024-07-26 01:16:44.465788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-26 01:16:44.465827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.162 qpair failed and we were unable to recover it. 
00:34:14.162 [2024-07-26 01:16:44.465950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-26 01:16:44.465978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.162 qpair failed and we were unable to recover it. 00:34:14.162 [2024-07-26 01:16:44.466115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-26 01:16:44.466143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.162 qpair failed and we were unable to recover it. 00:34:14.162 [2024-07-26 01:16:44.466251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-26 01:16:44.466277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.162 qpair failed and we were unable to recover it. 00:34:14.162 [2024-07-26 01:16:44.466383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-26 01:16:44.466409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.162 qpair failed and we were unable to recover it. 00:34:14.162 [2024-07-26 01:16:44.466583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-26 01:16:44.466609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.162 qpair failed and we were unable to recover it. 
00:34:14.162 [2024-07-26 01:16:44.466711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-26 01:16:44.466737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.162 qpair failed and we were unable to recover it. 00:34:14.162 [2024-07-26 01:16:44.466898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-26 01:16:44.466924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.162 qpair failed and we were unable to recover it. 00:34:14.162 [2024-07-26 01:16:44.467135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-26 01:16:44.467162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.162 qpair failed and we were unable to recover it. 00:34:14.162 [2024-07-26 01:16:44.467302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-26 01:16:44.467329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.162 qpair failed and we were unable to recover it. 00:34:14.162 [2024-07-26 01:16:44.467470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-26 01:16:44.467497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.162 qpair failed and we were unable to recover it. 
00:34:14.162 [2024-07-26 01:16:44.467616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-26 01:16:44.467643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.162 qpair failed and we were unable to recover it. 00:34:14.162 [2024-07-26 01:16:44.467787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-26 01:16:44.467817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.162 qpair failed and we were unable to recover it. 00:34:14.162 [2024-07-26 01:16:44.467954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-26 01:16:44.467981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.162 qpair failed and we were unable to recover it. 00:34:14.162 [2024-07-26 01:16:44.468142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-26 01:16:44.468171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.162 qpair failed and we were unable to recover it. 00:34:14.162 [2024-07-26 01:16:44.468312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-26 01:16:44.468343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.162 qpair failed and we were unable to recover it. 
00:34:14.162 [2024-07-26 01:16:44.468500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-26 01:16:44.468530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.162 qpair failed and we were unable to recover it. 00:34:14.162 [2024-07-26 01:16:44.468672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-26 01:16:44.468699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.162 qpair failed and we were unable to recover it. 00:34:14.162 [2024-07-26 01:16:44.468803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-26 01:16:44.468831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.162 qpair failed and we were unable to recover it. 00:34:14.162 [2024-07-26 01:16:44.469000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-26 01:16:44.469028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.162 qpair failed and we were unable to recover it. 00:34:14.162 [2024-07-26 01:16:44.469164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-26 01:16:44.469192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.162 qpair failed and we were unable to recover it. 
00:34:14.165 [2024-07-26 01:16:44.487663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.487692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 00:34:14.165 [2024-07-26 01:16:44.487838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.487874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 00:34:14.165 [2024-07-26 01:16:44.488012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.488044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 00:34:14.165 [2024-07-26 01:16:44.488286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.488325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 00:34:14.165 [2024-07-26 01:16:44.488498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.488526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 
00:34:14.165 [2024-07-26 01:16:44.488661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.488687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 00:34:14.165 [2024-07-26 01:16:44.488789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.488815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 00:34:14.165 [2024-07-26 01:16:44.488967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.488993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 00:34:14.165 [2024-07-26 01:16:44.489162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.489189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 00:34:14.165 [2024-07-26 01:16:44.489300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.489327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 
00:34:14.165 [2024-07-26 01:16:44.489428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.489455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 00:34:14.165 [2024-07-26 01:16:44.489565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.489591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 00:34:14.165 [2024-07-26 01:16:44.489703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.489730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 00:34:14.165 [2024-07-26 01:16:44.489869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.489896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 00:34:14.165 [2024-07-26 01:16:44.490107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.490134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 
00:34:14.165 [2024-07-26 01:16:44.490269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.490295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 00:34:14.165 [2024-07-26 01:16:44.490408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.490434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 00:34:14.165 [2024-07-26 01:16:44.490548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.490575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 00:34:14.165 [2024-07-26 01:16:44.490711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.490737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 00:34:14.165 [2024-07-26 01:16:44.490865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.490899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 
00:34:14.165 [2024-07-26 01:16:44.491035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.491068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 00:34:14.165 [2024-07-26 01:16:44.491208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.491234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 00:34:14.165 [2024-07-26 01:16:44.491333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.491359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 00:34:14.165 [2024-07-26 01:16:44.491493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.491519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 00:34:14.165 [2024-07-26 01:16:44.491660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.491691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 
00:34:14.165 [2024-07-26 01:16:44.491832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.491859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 00:34:14.165 [2024-07-26 01:16:44.491997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.492024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 00:34:14.165 [2024-07-26 01:16:44.492176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.492204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 00:34:14.165 [2024-07-26 01:16:44.492354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.492395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 00:34:14.165 [2024-07-26 01:16:44.492524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.492558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 
00:34:14.165 [2024-07-26 01:16:44.492732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.492759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 00:34:14.165 [2024-07-26 01:16:44.492903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.165 [2024-07-26 01:16:44.492936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.165 qpair failed and we were unable to recover it. 00:34:14.165 [2024-07-26 01:16:44.493077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.493110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.493221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.493248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.493398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.493426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 
00:34:14.166 [2024-07-26 01:16:44.493570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.493597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.493734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.493761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.493868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.493908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.494037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.494069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.494177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.494205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 
00:34:14.166 [2024-07-26 01:16:44.494347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.494375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.494537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.494564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.494671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.494700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.494845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.494873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.495032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.495066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 
00:34:14.166 [2024-07-26 01:16:44.495198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.495225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.495393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.495420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.495572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.495599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.495735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.495762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.495908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.495959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 
00:34:14.166 [2024-07-26 01:16:44.496183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.496211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.496360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.496386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.496548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.496575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.496688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.496715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.496848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.496874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 
00:34:14.166 [2024-07-26 01:16:44.496986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.497013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.497145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.497177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.497319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.497349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.497487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.497514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.497654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.497681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 
00:34:14.166 [2024-07-26 01:16:44.497818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.497846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.498017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.498046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.498193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.498221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.498373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.498403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.498519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.498546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 
00:34:14.166 [2024-07-26 01:16:44.498682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.498709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.498873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.498901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.499036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.499067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.499201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.499227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.499347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.499372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 
00:34:14.166 [2024-07-26 01:16:44.499491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.499518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.499648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.499674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.499806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.499835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.499975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.500002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.500117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.500144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 
00:34:14.166 [2024-07-26 01:16:44.500277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.500304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.500409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.500435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.500545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.500572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.500713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.500740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.500868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.500902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 
00:34:14.166 [2024-07-26 01:16:44.501038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.501072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.501211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.501238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.501402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.501429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.501594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.501621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.501755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.501781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 
00:34:14.166 [2024-07-26 01:16:44.501922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.501948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.502086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.166 [2024-07-26 01:16:44.502116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.166 qpair failed and we were unable to recover it. 00:34:14.166 [2024-07-26 01:16:44.502256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.502284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.502453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.502480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.502613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.502639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 
00:34:14.167 [2024-07-26 01:16:44.502800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.502826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.502941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.502967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.503102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.503129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.503231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.503257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.503365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.503397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 
00:34:14.167 [2024-07-26 01:16:44.503514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.503541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.503682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.503711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.503909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.503945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.504077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.504109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.504239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.504266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 
00:34:14.167 [2024-07-26 01:16:44.504380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.504407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.504544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.504571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.504680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.504708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.504880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.504921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.505029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.505064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 
00:34:14.167 [2024-07-26 01:16:44.505180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.505207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.505367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.505395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.505535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.505562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.505698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.505725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.505891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.505917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 
00:34:14.167 [2024-07-26 01:16:44.506063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.506103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.506234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.506261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.506373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.506411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.506523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.506552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.506750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.506777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 
00:34:14.167 [2024-07-26 01:16:44.506934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.506963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.507104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.507131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.507271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.507298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.507426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.507452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.507573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.507601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 
00:34:14.167 [2024-07-26 01:16:44.507737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.507764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.507869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.507897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.508016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.508043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.508199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.508243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.508401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.508429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 
00:34:14.167 [2024-07-26 01:16:44.508564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.508591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.508696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.508723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.508859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.508887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.509048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.509080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.509194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.509222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 
00:34:14.167 [2024-07-26 01:16:44.509397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.509425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.509567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.509596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.167 [2024-07-26 01:16:44.509737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.167 [2024-07-26 01:16:44.509764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.167 qpair failed and we were unable to recover it. 00:34:14.168 [2024-07-26 01:16:44.509899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.509927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 00:34:14.168 [2024-07-26 01:16:44.510069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.510105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 
00:34:14.168 [2024-07-26 01:16:44.510241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.510268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 00:34:14.168 [2024-07-26 01:16:44.510449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.510485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 00:34:14.168 [2024-07-26 01:16:44.510603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.510631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 00:34:14.168 [2024-07-26 01:16:44.510793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.510820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 00:34:14.168 [2024-07-26 01:16:44.510970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.510997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 
00:34:14.168 [2024-07-26 01:16:44.511136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.511164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 00:34:14.168 [2024-07-26 01:16:44.511275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.511303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 00:34:14.168 [2024-07-26 01:16:44.511421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.511450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 00:34:14.168 [2024-07-26 01:16:44.511593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.511621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 00:34:14.168 [2024-07-26 01:16:44.511750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.511777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 
00:34:14.168 [2024-07-26 01:16:44.511906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.511933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 00:34:14.168 [2024-07-26 01:16:44.512073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.512110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 00:34:14.168 [2024-07-26 01:16:44.512245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.512273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 00:34:14.168 [2024-07-26 01:16:44.512402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.512430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 00:34:14.168 [2024-07-26 01:16:44.512594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.512622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 
00:34:14.168 [2024-07-26 01:16:44.512756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.512783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 00:34:14.168 [2024-07-26 01:16:44.512927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.512954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 00:34:14.168 [2024-07-26 01:16:44.513119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.513147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 00:34:14.168 [2024-07-26 01:16:44.513271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.513300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 00:34:14.168 [2024-07-26 01:16:44.513440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.513467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 
00:34:14.168 [2024-07-26 01:16:44.513605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.513633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 00:34:14.168 [2024-07-26 01:16:44.513793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.513826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 00:34:14.168 [2024-07-26 01:16:44.513967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.513995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 00:34:14.168 [2024-07-26 01:16:44.514137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.514166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 00:34:14.168 [2024-07-26 01:16:44.514283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.514312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 
00:34:14.168 [2024-07-26 01:16:44.514449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.514478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 00:34:14.168 [2024-07-26 01:16:44.514625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.514652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 00:34:14.168 [2024-07-26 01:16:44.514817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.514844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 00:34:14.168 [2024-07-26 01:16:44.514970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.515015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 00:34:14.168 [2024-07-26 01:16:44.515160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.515188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 
00:34:14.168 [2024-07-26 01:16:44.515298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.515325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 00:34:14.168 [2024-07-26 01:16:44.515443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.515470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 00:34:14.168 [2024-07-26 01:16:44.515569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.515595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 00:34:14.168 [2024-07-26 01:16:44.515731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.515758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 00:34:14.168 [2024-07-26 01:16:44.515865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.168 [2024-07-26 01:16:44.515890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.168 qpair failed and we were unable to recover it. 
00:34:14.168 [2024-07-26 01:16:44.516007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.168 [2024-07-26 01:16:44.516033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.168 qpair failed and we were unable to recover it.
[... the same three-line error pattern repeats through 01:16:44.534007, alternating between tqpair=0xd70600 and tqpair=0x7fba58000b90, all with addr=10.0.0.2, port=4420, errno = 111 ...]
00:34:14.454 [2024-07-26 01:16:44.534119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.454 [2024-07-26 01:16:44.534145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.454 qpair failed and we were unable to recover it. 00:34:14.454 [2024-07-26 01:16:44.534278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.454 [2024-07-26 01:16:44.534304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.454 qpair failed and we were unable to recover it. 00:34:14.454 [2024-07-26 01:16:44.534451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.454 [2024-07-26 01:16:44.534477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.454 qpair failed and we were unable to recover it. 00:34:14.454 [2024-07-26 01:16:44.534587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.454 [2024-07-26 01:16:44.534613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.454 qpair failed and we were unable to recover it. 00:34:14.454 [2024-07-26 01:16:44.534729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.454 [2024-07-26 01:16:44.534755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.454 qpair failed and we were unable to recover it. 
00:34:14.454 [2024-07-26 01:16:44.534863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.454 [2024-07-26 01:16:44.534889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.454 qpair failed and we were unable to recover it. 00:34:14.454 [2024-07-26 01:16:44.534999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.454 [2024-07-26 01:16:44.535025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.454 qpair failed and we were unable to recover it. 00:34:14.454 [2024-07-26 01:16:44.535165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.454 [2024-07-26 01:16:44.535192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.454 qpair failed and we were unable to recover it. 00:34:14.454 [2024-07-26 01:16:44.535309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.454 [2024-07-26 01:16:44.535336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.454 qpair failed and we were unable to recover it. 00:34:14.454 [2024-07-26 01:16:44.535498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.454 [2024-07-26 01:16:44.535525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.454 qpair failed and we were unable to recover it. 
00:34:14.454 [2024-07-26 01:16:44.535645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.454 [2024-07-26 01:16:44.535671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.454 qpair failed and we were unable to recover it. 00:34:14.454 [2024-07-26 01:16:44.535827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.454 [2024-07-26 01:16:44.535853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.454 qpair failed and we were unable to recover it. 00:34:14.454 [2024-07-26 01:16:44.535958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.454 [2024-07-26 01:16:44.535984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.454 qpair failed and we were unable to recover it. 00:34:14.454 [2024-07-26 01:16:44.536130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.454 [2024-07-26 01:16:44.536160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.454 qpair failed and we were unable to recover it. 00:34:14.454 [2024-07-26 01:16:44.536278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.454 [2024-07-26 01:16:44.536306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.454 qpair failed and we were unable to recover it. 
00:34:14.454 [2024-07-26 01:16:44.536442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.454 [2024-07-26 01:16:44.536469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.454 qpair failed and we were unable to recover it. 00:34:14.454 [2024-07-26 01:16:44.536572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.454 [2024-07-26 01:16:44.536599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.454 qpair failed and we were unable to recover it. 00:34:14.454 [2024-07-26 01:16:44.536733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.454 [2024-07-26 01:16:44.536760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.454 qpair failed and we were unable to recover it. 00:34:14.454 [2024-07-26 01:16:44.536872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.454 [2024-07-26 01:16:44.536899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.454 qpair failed and we were unable to recover it. 00:34:14.454 [2024-07-26 01:16:44.537066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.454 [2024-07-26 01:16:44.537094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.454 qpair failed and we were unable to recover it. 
00:34:14.454 [2024-07-26 01:16:44.537229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.454 [2024-07-26 01:16:44.537256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.454 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.537366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.537394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.537530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.537558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.537718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.537744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.537850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.537878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 
00:34:14.455 [2024-07-26 01:16:44.537998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.538037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.538186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.538214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.538331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.538358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.538472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.538498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.538661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.538687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 
00:34:14.455 [2024-07-26 01:16:44.538798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.538824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.538932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.538963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.539108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.539136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.539242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.539268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.539404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.539430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 
00:34:14.455 [2024-07-26 01:16:44.539541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.539568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.539726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.539753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.539858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.539886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.540020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.540047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.540171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.540197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 
00:34:14.455 [2024-07-26 01:16:44.540299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.540333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.540493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.540519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.540647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.540673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.540785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.540814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.540953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.540980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 
00:34:14.455 [2024-07-26 01:16:44.541116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.541144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.541286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.541314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.541450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.541478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.541617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.541644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.541753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.541780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 
00:34:14.455 [2024-07-26 01:16:44.541941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.541967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.542098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.542125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.542282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.542309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.542447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.542474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.542585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.542612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 
00:34:14.455 [2024-07-26 01:16:44.542715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.542741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.542876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.542902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.543008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.543034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.543152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.543179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.543284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.543310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 
00:34:14.455 [2024-07-26 01:16:44.543442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.543468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.543574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.543600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.543731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.543757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.543893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.543919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.544025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.544056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 
00:34:14.455 [2024-07-26 01:16:44.544232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.544260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.544399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.544427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.544590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.544622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.544786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.544813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.544951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.544978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 
00:34:14.455 [2024-07-26 01:16:44.545149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.545176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.545319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.455 [2024-07-26 01:16:44.545346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.455 qpair failed and we were unable to recover it. 00:34:14.455 [2024-07-26 01:16:44.545449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.456 [2024-07-26 01:16:44.545475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.456 qpair failed and we were unable to recover it. 00:34:14.456 [2024-07-26 01:16:44.545580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.456 [2024-07-26 01:16:44.545606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.456 qpair failed and we were unable to recover it. 00:34:14.456 [2024-07-26 01:16:44.545766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.456 [2024-07-26 01:16:44.545792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.456 qpair failed and we were unable to recover it. 
00:34:14.456 [2024-07-26 01:16:44.545905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.456 [2024-07-26 01:16:44.545933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.456 qpair failed and we were unable to recover it. 
[... the same connect()/qpair failure pair repeats continuously between 01:16:44.545 and 01:16:44.563, alternating between tqpair=0xd70600 and tqpair=0x7fba58000b90, always with addr=10.0.0.2, port=4420, errno = 111; identical repeats omitted ...]
00:34:14.458 [2024-07-26 01:16:44.563802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.563836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 
00:34:14.458 [2024-07-26 01:16:44.563994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.564021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.564185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.564212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.564312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.564339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.564469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.564496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.564632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.564658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 
00:34:14.458 [2024-07-26 01:16:44.564766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.564792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.564925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.564954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.565113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.565141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.565301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.565329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.565439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.565467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 
00:34:14.458 [2024-07-26 01:16:44.565602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.565629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.565788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.565815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.565944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.565973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.566103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.566144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.566260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.566289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 
00:34:14.458 [2024-07-26 01:16:44.566423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.566450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.566590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.566617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.566718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.566744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.566918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.566958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.567081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.567112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 
00:34:14.458 [2024-07-26 01:16:44.567252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.567280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.567419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.567446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.567583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.567610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.567746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.567774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.567926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.567955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 
00:34:14.458 [2024-07-26 01:16:44.568088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.568127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.568274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.568308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.568456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.568483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.568621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.568647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.568782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.568809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 
00:34:14.458 [2024-07-26 01:16:44.568923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.568949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.569083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.569110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.569240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.569266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.569395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.569421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.569564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.569591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 
00:34:14.458 [2024-07-26 01:16:44.569755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.569782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.569919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.569945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.570075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.570102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.570215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.570242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.570371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.570397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 
00:34:14.458 [2024-07-26 01:16:44.570531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.570557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.570691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.570717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.570828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.570856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.570996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.571023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.571166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.571193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 
00:34:14.458 [2024-07-26 01:16:44.571299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.571325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.571453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.571479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.571616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.571642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.571803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.571829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.571944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.571970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 
00:34:14.458 [2024-07-26 01:16:44.572096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.458 [2024-07-26 01:16:44.572122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.458 qpair failed and we were unable to recover it. 00:34:14.458 [2024-07-26 01:16:44.572256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.572282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 00:34:14.459 [2024-07-26 01:16:44.572419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.572445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 00:34:14.459 [2024-07-26 01:16:44.572555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.572586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 00:34:14.459 [2024-07-26 01:16:44.572695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.572720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 
00:34:14.459 [2024-07-26 01:16:44.572856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.572883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 00:34:14.459 [2024-07-26 01:16:44.572995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.573021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 00:34:14.459 [2024-07-26 01:16:44.573193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.573220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 00:34:14.459 [2024-07-26 01:16:44.573351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.573377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 00:34:14.459 [2024-07-26 01:16:44.573485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.573510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 
00:34:14.459 [2024-07-26 01:16:44.573649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.573675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 00:34:14.459 [2024-07-26 01:16:44.573778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.573804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 00:34:14.459 [2024-07-26 01:16:44.573908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.573935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 00:34:14.459 [2024-07-26 01:16:44.574088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.574128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 00:34:14.459 [2024-07-26 01:16:44.574282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.574322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 
00:34:14.459 [2024-07-26 01:16:44.574465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.574493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 00:34:14.459 [2024-07-26 01:16:44.574629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.574657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 00:34:14.459 [2024-07-26 01:16:44.574768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.574795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 00:34:14.459 [2024-07-26 01:16:44.574927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.574953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 00:34:14.459 [2024-07-26 01:16:44.575072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.575100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 
00:34:14.459 [2024-07-26 01:16:44.575249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.575276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 00:34:14.459 [2024-07-26 01:16:44.575400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.575426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 00:34:14.459 [2024-07-26 01:16:44.575559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.575586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 00:34:14.459 [2024-07-26 01:16:44.575721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.575747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 00:34:14.459 [2024-07-26 01:16:44.575876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.575902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 
00:34:14.459 [2024-07-26 01:16:44.576077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.576105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 00:34:14.459 [2024-07-26 01:16:44.576215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.576243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 00:34:14.459 [2024-07-26 01:16:44.576353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.576379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 00:34:14.459 [2024-07-26 01:16:44.576487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.576513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 00:34:14.459 [2024-07-26 01:16:44.576620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.576646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 
00:34:14.459 [2024-07-26 01:16:44.576776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.576802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 00:34:14.459 [2024-07-26 01:16:44.576941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.576968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 00:34:14.459 [2024-07-26 01:16:44.577074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.577101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 00:34:14.459 [2024-07-26 01:16:44.577211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.577237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 00:34:14.459 [2024-07-26 01:16:44.577368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.459 [2024-07-26 01:16:44.577395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.459 qpair failed and we were unable to recover it. 
00:34:14.459 [2024-07-26 01:16:44.577531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.459 [2024-07-26 01:16:44.577557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.459 qpair failed and we were unable to recover it.
00:34:14.459 [2024-07-26 01:16:44.577690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.459 [2024-07-26 01:16:44.577716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.459 qpair failed and we were unable to recover it.
00:34:14.459 [2024-07-26 01:16:44.577831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.459 [2024-07-26 01:16:44.577861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.459 qpair failed and we were unable to recover it.
00:34:14.459 [2024-07-26 01:16:44.577972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.459 [2024-07-26 01:16:44.578002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.459 qpair failed and we were unable to recover it.
00:34:14.459 [2024-07-26 01:16:44.578138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.459 [2024-07-26 01:16:44.578166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.459 qpair failed and we were unable to recover it.
00:34:14.459 [2024-07-26 01:16:44.578309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.459 [2024-07-26 01:16:44.578335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.459 qpair failed and we were unable to recover it.
00:34:14.459 [2024-07-26 01:16:44.578471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.459 [2024-07-26 01:16:44.578497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.459 qpair failed and we were unable to recover it.
00:34:14.459 [2024-07-26 01:16:44.578600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.459 [2024-07-26 01:16:44.578627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.459 qpair failed and we were unable to recover it.
00:34:14.459 [2024-07-26 01:16:44.578757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.459 [2024-07-26 01:16:44.578784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.459 qpair failed and we were unable to recover it.
00:34:14.459 [2024-07-26 01:16:44.578932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.459 [2024-07-26 01:16:44.578974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.459 qpair failed and we were unable to recover it.
00:34:14.459 [2024-07-26 01:16:44.579133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.459 [2024-07-26 01:16:44.579162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.459 qpair failed and we were unable to recover it.
00:34:14.459 [2024-07-26 01:16:44.579297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.459 [2024-07-26 01:16:44.579324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.459 qpair failed and we were unable to recover it.
00:34:14.459 [2024-07-26 01:16:44.579443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.459 [2024-07-26 01:16:44.579471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.459 qpair failed and we were unable to recover it.
00:34:14.459 [2024-07-26 01:16:44.579583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.459 [2024-07-26 01:16:44.579610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.459 qpair failed and we were unable to recover it.
00:34:14.459 [2024-07-26 01:16:44.579750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.459 [2024-07-26 01:16:44.579777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.459 qpair failed and we were unable to recover it.
00:34:14.459 [2024-07-26 01:16:44.579918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.459 [2024-07-26 01:16:44.579945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.459 qpair failed and we were unable to recover it.
00:34:14.459 [2024-07-26 01:16:44.580108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.459 [2024-07-26 01:16:44.580136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.459 qpair failed and we were unable to recover it.
00:34:14.459 [2024-07-26 01:16:44.580248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.459 [2024-07-26 01:16:44.580276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.459 qpair failed and we were unable to recover it.
00:34:14.459 [2024-07-26 01:16:44.580386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.459 [2024-07-26 01:16:44.580413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.459 qpair failed and we were unable to recover it.
00:34:14.459 [2024-07-26 01:16:44.580516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.459 [2024-07-26 01:16:44.580545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.459 qpair failed and we were unable to recover it.
00:34:14.459 [2024-07-26 01:16:44.580652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.459 [2024-07-26 01:16:44.580682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.459 qpair failed and we were unable to recover it.
00:34:14.459 [2024-07-26 01:16:44.580801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.459 [2024-07-26 01:16:44.580828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.459 qpair failed and we were unable to recover it.
00:34:14.459 [2024-07-26 01:16:44.580994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.459 [2024-07-26 01:16:44.581026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.459 qpair failed and we were unable to recover it.
00:34:14.459 [2024-07-26 01:16:44.581192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.459 [2024-07-26 01:16:44.581220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.459 qpair failed and we were unable to recover it.
00:34:14.459 [2024-07-26 01:16:44.581433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.459 [2024-07-26 01:16:44.581461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.459 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.581601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.581631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.581771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.581798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.581932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.581960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.582073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.582101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.582206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.582233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.582345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.582373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.582479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.582507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.582645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.582673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.582829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.582856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.583016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.583043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.583166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.583193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.583344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.583384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.583500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.583529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.583664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.583691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.583803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.583832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.583944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.583971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.584137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.584163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.584278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.584304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.584471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.584497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.584629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.584655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.584792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.584830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.584980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.585007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.585107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.585133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.585241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.585268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.585404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.585436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.585564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.585590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.585727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.585754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.585883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.585909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.586043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.586081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.586190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.586221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.586321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.586347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.586489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.586516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.586648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.586674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.586802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.586828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.586943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.586970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.587106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.587136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.587297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.587330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.587476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.587504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.587660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.587687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.587823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.587850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.587962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.587989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.588124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.588151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.588255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.588281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.588388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.588412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.588515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.588540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.588667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.588693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.588799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.588826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.588930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.588957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.589097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.589124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.589239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.589278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.589421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.589449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.589589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.589622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.589757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.589784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.589889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.589915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.590048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.590082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.590221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.590249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.590392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.590423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.590550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.590579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.590686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.590714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.460 qpair failed and we were unable to recover it.
00:34:14.460 [2024-07-26 01:16:44.590849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.460 [2024-07-26 01:16:44.590888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.461 qpair failed and we were unable to recover it.
00:34:14.461 [2024-07-26 01:16:44.591026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.461 [2024-07-26 01:16:44.591053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.461 qpair failed and we were unable to recover it.
00:34:14.461 [2024-07-26 01:16:44.591205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.461 [2024-07-26 01:16:44.591232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.461 qpair failed and we were unable to recover it.
00:34:14.461 [2024-07-26 01:16:44.591342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.461 [2024-07-26 01:16:44.591370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.461 qpair failed and we were unable to recover it.
00:34:14.461 [2024-07-26 01:16:44.591533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.461 [2024-07-26 01:16:44.591570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.461 qpair failed and we were unable to recover it.
00:34:14.461 [2024-07-26 01:16:44.591707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.461 [2024-07-26 01:16:44.591734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.461 qpair failed and we were unable to recover it.
00:34:14.461 [2024-07-26 01:16:44.591879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.461 [2024-07-26 01:16:44.591907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.461 qpair failed and we were unable to recover it.
00:34:14.461 [2024-07-26 01:16:44.592065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.461 [2024-07-26 01:16:44.592103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.461 qpair failed and we were unable to recover it.
00:34:14.461 [2024-07-26 01:16:44.592231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.461 [2024-07-26 01:16:44.592259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.461 qpair failed and we were unable to recover it.
00:34:14.461 [2024-07-26 01:16:44.592370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.461 [2024-07-26 01:16:44.592398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.461 qpair failed and we were unable to recover it.
00:34:14.461 [2024-07-26 01:16:44.592517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.461 [2024-07-26 01:16:44.592553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.461 qpair failed and we were unable to recover it.
00:34:14.461 [2024-07-26 01:16:44.592688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.592715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.592939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.592967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.593107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.593135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.593298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.593325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.593464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.593491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 
00:34:14.461 [2024-07-26 01:16:44.593648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.593674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.593846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.593873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.594036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.594071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.594240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.594280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.594427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.594456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 
00:34:14.461 [2024-07-26 01:16:44.594595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.594623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.594761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.594789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.594928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.594959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.595072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.595105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.595238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.595265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 
00:34:14.461 [2024-07-26 01:16:44.595386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.595413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.595547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.595574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.595687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.595718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.595870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.595910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.596032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.596066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 
00:34:14.461 [2024-07-26 01:16:44.596189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.596216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.596345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.596378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.596545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.596571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.596669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.596694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.596798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.596824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 
00:34:14.461 [2024-07-26 01:16:44.596970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.596996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.597143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.597173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.597279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.597306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.597454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.597482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.597595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.597632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 
00:34:14.461 [2024-07-26 01:16:44.597755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.597782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.597922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.597949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.598096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.598123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.598259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.598288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.598431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.598458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 
00:34:14.461 [2024-07-26 01:16:44.598626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.598653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.598796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.598823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.598981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.599008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.599159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.461 [2024-07-26 01:16:44.599187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.461 qpair failed and we were unable to recover it. 00:34:14.461 [2024-07-26 01:16:44.599327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.599354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 
00:34:14.462 [2024-07-26 01:16:44.599489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.599516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.599677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.599703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.599801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.599828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.599964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.599991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.600113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.600141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 
00:34:14.462 [2024-07-26 01:16:44.600304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.600333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.600464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.600491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.600650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.600677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.600811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.600842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.600978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.601008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 
00:34:14.462 [2024-07-26 01:16:44.601146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.601174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.601287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.601324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.601504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.601531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.601674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.601701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.601833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.601860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 
00:34:14.462 [2024-07-26 01:16:44.602019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.602046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.602166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.602193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.602323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.602361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.602495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.602523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.602702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.602729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 
00:34:14.462 [2024-07-26 01:16:44.602835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.602862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.603023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.603050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.603169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.603196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.603336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.603365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.603508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.603535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 
00:34:14.462 [2024-07-26 01:16:44.603670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.603697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.603823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.603848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.603963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.603988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.604123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.604150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.604262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.604288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 
00:34:14.462 [2024-07-26 01:16:44.604454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.604480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.604624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.604651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.604760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.604788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.604914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.604941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.605071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.605109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 
00:34:14.462 [2024-07-26 01:16:44.605264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.605292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.605454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.605481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.605584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.605610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.605718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.605745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.605877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.605903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 
00:34:14.462 [2024-07-26 01:16:44.606044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.606075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.606198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.606225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.606328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.606352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.606511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.606537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.606653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.606682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 
00:34:14.462 [2024-07-26 01:16:44.606817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.606844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.606963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.606990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.607135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.607164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.607298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.607325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 00:34:14.462 [2024-07-26 01:16:44.607489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.462 [2024-07-26 01:16:44.607516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.462 qpair failed and we were unable to recover it. 
00:34:14.464 [2024-07-26 01:16:44.625214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.464 [2024-07-26 01:16:44.625240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.464 qpair failed and we were unable to recover it. 00:34:14.464 [2024-07-26 01:16:44.625380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.464 [2024-07-26 01:16:44.625406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.464 qpair failed and we were unable to recover it. 00:34:14.464 [2024-07-26 01:16:44.625538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.464 [2024-07-26 01:16:44.625564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.464 qpair failed and we were unable to recover it. 00:34:14.464 [2024-07-26 01:16:44.625667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.464 [2024-07-26 01:16:44.625694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.464 qpair failed and we were unable to recover it. 00:34:14.464 [2024-07-26 01:16:44.625803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.464 [2024-07-26 01:16:44.625827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.464 qpair failed and we were unable to recover it. 
00:34:14.464 [2024-07-26 01:16:44.625933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.464 [2024-07-26 01:16:44.625959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.464 qpair failed and we were unable to recover it. 00:34:14.464 [2024-07-26 01:16:44.626097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.464 [2024-07-26 01:16:44.626124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.464 qpair failed and we were unable to recover it. 00:34:14.464 [2024-07-26 01:16:44.626230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.464 [2024-07-26 01:16:44.626257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.464 qpair failed and we were unable to recover it. 00:34:14.464 [2024-07-26 01:16:44.626391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.464 [2024-07-26 01:16:44.626417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.464 qpair failed and we were unable to recover it. 00:34:14.464 [2024-07-26 01:16:44.626520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.626547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 
00:34:14.465 [2024-07-26 01:16:44.626657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.626687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.626862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.626888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.627007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.627033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.627153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.627180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.627325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.627351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 
00:34:14.465 [2024-07-26 01:16:44.627508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.627534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.627659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.627686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.627793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.627817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.627948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.627975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.628115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.628143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 
00:34:14.465 [2024-07-26 01:16:44.628277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.628303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.628412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.628439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.628572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.628598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.628763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.628789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.628894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.628920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 
00:34:14.465 [2024-07-26 01:16:44.629077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.629104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.629218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.629244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.629351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.629378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.629538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.629564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.629670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.629697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 
00:34:14.465 [2024-07-26 01:16:44.629832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.629859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.630014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.630040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.630155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.630182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.630287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.630313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.630443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.630469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 
00:34:14.465 [2024-07-26 01:16:44.630600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.630626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.630752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.630778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.630917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.630947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.631063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.631090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.631192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.631218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 
00:34:14.465 [2024-07-26 01:16:44.631352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.631378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.631506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.631532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.631638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.631664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.631794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.631820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.631955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.631981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 
00:34:14.465 [2024-07-26 01:16:44.632109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.632135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.632256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.632282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.632431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.632457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.632615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.632641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.632784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.632811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 
00:34:14.465 [2024-07-26 01:16:44.632946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.632971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.633101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.633128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.633240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.633266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.633391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.633417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.633522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.633548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 
00:34:14.465 [2024-07-26 01:16:44.633683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.633709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.633815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.633841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.633955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.633982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.634141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.634168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.634313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.634340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 
00:34:14.465 [2024-07-26 01:16:44.634444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.465 [2024-07-26 01:16:44.634471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.465 qpair failed and we were unable to recover it. 00:34:14.465 [2024-07-26 01:16:44.634636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.634662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.634788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.634814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.634979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.635006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.635117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.635144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 
00:34:14.466 [2024-07-26 01:16:44.635256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.635283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.635418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.635445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.635559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.635585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.635725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.635752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.635896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.635923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 
00:34:14.466 [2024-07-26 01:16:44.636033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.636064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.636172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.636199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.636295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.636321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.636460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.636486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.636618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.636644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 
00:34:14.466 [2024-07-26 01:16:44.636779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.636805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.636918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.636945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.637069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.637095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.637227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.637257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.637393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.637419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 
00:34:14.466 [2024-07-26 01:16:44.637552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.637578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.637689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.637715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.637853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.637879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.638013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.638039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.638192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.638219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 
00:34:14.466 [2024-07-26 01:16:44.638359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.638385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.638519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.638545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.638680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.638706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.638815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.638841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.638972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.638997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 
00:34:14.466 [2024-07-26 01:16:44.639126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.639153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.639289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.639315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.639454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.639484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.639596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.639622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.639725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.639751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 
00:34:14.466 [2024-07-26 01:16:44.639878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.639904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.640050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.640088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.640201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.640227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.640384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.640410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.640555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.640581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 
00:34:14.466 [2024-07-26 01:16:44.640739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.640765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.640899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.640926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.641066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.641092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.641205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.641230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.641341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.641367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 
00:34:14.466 [2024-07-26 01:16:44.641463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.641493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.641650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.641676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.641836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.641862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.641998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.642024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.642134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.642161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 
00:34:14.466 [2024-07-26 01:16:44.642295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.642329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.642459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.642485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.642615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.642641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.642751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.642777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.642880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.642906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 
00:34:14.466 [2024-07-26 01:16:44.643017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.643043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.643192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.643219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.643351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.643377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.643514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.643541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 00:34:14.466 [2024-07-26 01:16:44.643707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.643733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.466 qpair failed and we were unable to recover it. 
00:34:14.466 [2024-07-26 01:16:44.643837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.466 [2024-07-26 01:16:44.643863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.643977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.644003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.644111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.644138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.644273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.644299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.644409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.644436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 
00:34:14.467 [2024-07-26 01:16:44.644579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.644605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.644718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.644744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.644846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.644872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.645004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.645030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.645139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.645166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 
00:34:14.467 [2024-07-26 01:16:44.645300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.645326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.645465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.645491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.645628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.645655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.645801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.645827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.645937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.645963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 
00:34:14.467 [2024-07-26 01:16:44.646099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.646125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.646250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.646276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.646388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.646414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.646531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.646557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.646709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.646735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 
00:34:14.467 [2024-07-26 01:16:44.646947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.646973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.647114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.647141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.647275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.647301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.647434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.647460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.647619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.647645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 
00:34:14.467 [2024-07-26 01:16:44.647777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.647803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.647962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.647994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.648170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.648197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.648360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.648386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.648495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.648522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 
00:34:14.467 [2024-07-26 01:16:44.648633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.648660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.648770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.648796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.648953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.648979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.649113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.649139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.649248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.649274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 
00:34:14.467 [2024-07-26 01:16:44.649406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.649432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.649562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.649588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.649715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.649741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.649901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.649928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.650086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.650112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 
00:34:14.467 [2024-07-26 01:16:44.650260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.650286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.650389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.650414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.650547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.650573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.650688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.650714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.650825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.650851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 
00:34:14.467 [2024-07-26 01:16:44.650995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.651021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.651155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.651182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.651392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.651418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.651525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.651553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.651701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.651727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 
00:34:14.467 [2024-07-26 01:16:44.651884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.651910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.652023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.652050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.652205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.652231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.652365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.652391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.652508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.652535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 
00:34:14.467 [2024-07-26 01:16:44.652644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.652670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.652816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.467 [2024-07-26 01:16:44.652841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.467 qpair failed and we were unable to recover it. 00:34:14.467 [2024-07-26 01:16:44.652959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.652985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.653092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.653119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.653252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.653278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 
00:34:14.468 [2024-07-26 01:16:44.653440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.653466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.653598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.653623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.653762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.653788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.653946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.653972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.654117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.654143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 
00:34:14.468 [2024-07-26 01:16:44.654275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.654302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.654514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.654540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.654682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.654708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.654842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.654868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.655003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.655029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 
00:34:14.468 [2024-07-26 01:16:44.655172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.655198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.655334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.655360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.655495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.655521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.655670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.655696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.655837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.655865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 
00:34:14.468 [2024-07-26 01:16:44.656077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.656104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.656238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.656264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.656397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.656423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.656529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.656555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.656722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.656748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 
00:34:14.468 [2024-07-26 01:16:44.656882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.656908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.657125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.657152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.657326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.657353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.657458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.657484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.657618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.657644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 
00:34:14.468 [2024-07-26 01:16:44.657751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.657777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.657907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.657933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.658070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.658097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.658230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.658256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.658388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.658419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 
00:34:14.468 [2024-07-26 01:16:44.658533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.658560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.658662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.658688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.658824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.658850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.658982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.659009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.659169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.659200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 
00:34:14.468 [2024-07-26 01:16:44.659306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.659332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.659478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.659505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.659660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.659686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.659813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.659841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.659978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.660004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 
00:34:14.468 [2024-07-26 01:16:44.660137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.660164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.660297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.660323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.660459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.660485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.660588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.660614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.660751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.660777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 
00:34:14.468 [2024-07-26 01:16:44.660920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.660946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.661051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.661084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.661199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.661226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.661271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd7e620 (9): Bad file descriptor 00:34:14.468 [2024-07-26 01:16:44.661455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.661496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 00:34:14.468 [2024-07-26 01:16:44.661612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.468 [2024-07-26 01:16:44.661640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.468 qpair failed and we were unable to recover it. 
00:34:14.468 [2024-07-26 01:16:44.661802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.661832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.661969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.661996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.662102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.662129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.662298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.662325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.662495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.662523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 
00:34:14.469 [2024-07-26 01:16:44.662633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.662659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.662818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.662844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.662998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.663025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.663152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.663178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.663304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.663330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 
00:34:14.469 [2024-07-26 01:16:44.663477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.663503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.663603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.663630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.663761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.663788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.663925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.663951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.664057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.664088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 
00:34:14.469 [2024-07-26 01:16:44.664190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.664216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.664362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.664397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.664547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.664573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.664701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.664727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.664857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.664894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 
00:34:14.469 [2024-07-26 01:16:44.665010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.665036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.665157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.665183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.665320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.665345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.665456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.665482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.665627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.665653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 
00:34:14.469 [2024-07-26 01:16:44.665868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.665905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.666035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.666066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.666206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.666233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.666339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.666365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.666526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.666564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 
00:34:14.469 [2024-07-26 01:16:44.666699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.666725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.666875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.666901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.667057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.667110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.667248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.667275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.667413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.667439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 
00:34:14.469 [2024-07-26 01:16:44.667576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.667602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.667740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.667765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.667896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.667923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.668049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.668108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.668223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.668251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 
00:34:14.469 [2024-07-26 01:16:44.668402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.668429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.668566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.668593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.668742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.668770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.668890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.668921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.669036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.669070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 
00:34:14.469 [2024-07-26 01:16:44.669193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.669219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.669350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.669376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.669511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.669537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.669640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.669677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.669788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.669814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 
00:34:14.469 [2024-07-26 01:16:44.669951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.669977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.670123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.670150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.670265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.670291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.670402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.670438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.670575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.670601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 
00:34:14.469 [2024-07-26 01:16:44.670761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.670787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.670924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.469 [2024-07-26 01:16:44.670950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.469 qpair failed and we were unable to recover it. 00:34:14.469 [2024-07-26 01:16:44.671081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.470 [2024-07-26 01:16:44.671109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.470 qpair failed and we were unable to recover it. 00:34:14.470 [2024-07-26 01:16:44.671219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.470 [2024-07-26 01:16:44.671245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.470 qpair failed and we were unable to recover it. 00:34:14.470 [2024-07-26 01:16:44.671382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.470 [2024-07-26 01:16:44.671407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.470 qpair failed and we were unable to recover it. 
00:34:14.470 [2024-07-26 01:16:44.671567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.470 [2024-07-26 01:16:44.671603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.470 qpair failed and we were unable to recover it. 00:34:14.470 [2024-07-26 01:16:44.671745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.470 [2024-07-26 01:16:44.671770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.470 qpair failed and we were unable to recover it. 00:34:14.470 [2024-07-26 01:16:44.671937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.470 [2024-07-26 01:16:44.671967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.470 qpair failed and we were unable to recover it. 00:34:14.470 [2024-07-26 01:16:44.672124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.470 [2024-07-26 01:16:44.672151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.470 qpair failed and we were unable to recover it. 00:34:14.470 [2024-07-26 01:16:44.672278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.470 [2024-07-26 01:16:44.672319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.470 qpair failed and we were unable to recover it. 
00:34:14.470 [2024-07-26 01:16:44.672495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.470 [2024-07-26 01:16:44.672529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.470 qpair failed and we were unable to recover it. 00:34:14.470 [2024-07-26 01:16:44.672657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.470 [2024-07-26 01:16:44.672685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.470 qpair failed and we were unable to recover it. 00:34:14.470 [2024-07-26 01:16:44.672792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.470 [2024-07-26 01:16:44.672819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.470 qpair failed and we were unable to recover it. 00:34:14.470 [2024-07-26 01:16:44.672947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.470 [2024-07-26 01:16:44.672974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.470 qpair failed and we were unable to recover it. 00:34:14.470 [2024-07-26 01:16:44.673112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.470 [2024-07-26 01:16:44.673139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.470 qpair failed and we were unable to recover it. 
00:34:14.470 [2024-07-26 01:16:44.673295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.470 [2024-07-26 01:16:44.673321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.470 qpair failed and we were unable to recover it. 00:34:14.470 [2024-07-26 01:16:44.673453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.470 [2024-07-26 01:16:44.673479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.470 qpair failed and we were unable to recover it. 00:34:14.470 [2024-07-26 01:16:44.673638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.470 [2024-07-26 01:16:44.673672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.470 qpair failed and we were unable to recover it. 00:34:14.470 [2024-07-26 01:16:44.673777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.470 [2024-07-26 01:16:44.673803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.470 qpair failed and we were unable to recover it. 00:34:14.470 [2024-07-26 01:16:44.673933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.470 [2024-07-26 01:16:44.673960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.470 qpair failed and we were unable to recover it. 
00:34:14.470 [2024-07-26 01:16:44.674084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.470 [2024-07-26 01:16:44.674111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.470 qpair failed and we were unable to recover it. 00:34:14.470 [2024-07-26 01:16:44.674245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.470 [2024-07-26 01:16:44.674271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.470 qpair failed and we were unable to recover it. 00:34:14.470 [2024-07-26 01:16:44.674408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.470 [2024-07-26 01:16:44.674434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.470 qpair failed and we were unable to recover it. 00:34:14.470 [2024-07-26 01:16:44.674562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.470 [2024-07-26 01:16:44.674588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.470 qpair failed and we were unable to recover it. 00:34:14.470 [2024-07-26 01:16:44.674733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.470 [2024-07-26 01:16:44.674759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.470 qpair failed and we were unable to recover it. 
00:34:14.470 [2024-07-26 01:16:44.674888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.470 [2024-07-26 01:16:44.674914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.470 qpair failed and we were unable to recover it. 00:34:14.470 [2024-07-26 01:16:44.675063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.470 [2024-07-26 01:16:44.675095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.470 qpair failed and we were unable to recover it. 00:34:14.470 [2024-07-26 01:16:44.675228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.470 [2024-07-26 01:16:44.675254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.470 qpair failed and we were unable to recover it. 00:34:14.470 [2024-07-26 01:16:44.675366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.470 [2024-07-26 01:16:44.675392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.470 qpair failed and we were unable to recover it. 00:34:14.470 [2024-07-26 01:16:44.675541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.471 [2024-07-26 01:16:44.675577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.471 qpair failed and we were unable to recover it. 
00:34:14.471 [2024-07-26 01:16:44.675725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.471 [2024-07-26 01:16:44.675763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.471 qpair failed and we were unable to recover it. 00:34:14.471 [2024-07-26 01:16:44.675878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.471 [2024-07-26 01:16:44.675904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.471 qpair failed and we were unable to recover it. 00:34:14.471 [2024-07-26 01:16:44.676036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.471 [2024-07-26 01:16:44.676088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.471 qpair failed and we were unable to recover it. 00:34:14.471 [2024-07-26 01:16:44.676256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.471 [2024-07-26 01:16:44.676282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.471 qpair failed and we were unable to recover it. 00:34:14.471 [2024-07-26 01:16:44.676393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.471 [2024-07-26 01:16:44.676419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.471 qpair failed and we were unable to recover it. 
00:34:14.471 [2024-07-26 01:16:44.676573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.471 [2024-07-26 01:16:44.676599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.471 qpair failed and we were unable to recover it. 00:34:14.471 [2024-07-26 01:16:44.676727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.471 [2024-07-26 01:16:44.676754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.471 qpair failed and we were unable to recover it. 00:34:14.471 [2024-07-26 01:16:44.676891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.471 [2024-07-26 01:16:44.676921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.471 qpair failed and we were unable to recover it. 00:34:14.471 [2024-07-26 01:16:44.677028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.471 [2024-07-26 01:16:44.677054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.471 qpair failed and we were unable to recover it. 00:34:14.471 [2024-07-26 01:16:44.677164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.471 [2024-07-26 01:16:44.677190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.471 qpair failed and we were unable to recover it. 
00:34:14.471 [2024-07-26 01:16:44.677325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.471 [2024-07-26 01:16:44.677353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.471 qpair failed and we were unable to recover it. 00:34:14.471 [2024-07-26 01:16:44.677488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.471 [2024-07-26 01:16:44.677525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.471 qpair failed and we were unable to recover it. 00:34:14.471 [2024-07-26 01:16:44.677644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.471 [2024-07-26 01:16:44.677670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.471 qpair failed and we were unable to recover it. 00:34:14.471 [2024-07-26 01:16:44.677795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.471 [2024-07-26 01:16:44.677821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.471 qpair failed and we were unable to recover it. 00:34:14.471 [2024-07-26 01:16:44.677981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.471 [2024-07-26 01:16:44.678012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.472 qpair failed and we were unable to recover it. 
00:34:14.472 [2024-07-26 01:16:44.678192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.472 [2024-07-26 01:16:44.678219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.472 qpair failed and we were unable to recover it. 00:34:14.472 [2024-07-26 01:16:44.678327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.472 [2024-07-26 01:16:44.678353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.472 qpair failed and we were unable to recover it. 00:34:14.472 [2024-07-26 01:16:44.678509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.472 [2024-07-26 01:16:44.678535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.472 qpair failed and we were unable to recover it. 00:34:14.472 [2024-07-26 01:16:44.678642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.472 [2024-07-26 01:16:44.678668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.472 qpair failed and we were unable to recover it. 00:34:14.472 [2024-07-26 01:16:44.678853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.472 [2024-07-26 01:16:44.678894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.472 qpair failed and we were unable to recover it. 
00:34:14.472 [2024-07-26 01:16:44.679014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.472 [2024-07-26 01:16:44.679044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.472 qpair failed and we were unable to recover it. 00:34:14.472 [2024-07-26 01:16:44.679208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.472 [2024-07-26 01:16:44.679248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.472 qpair failed and we were unable to recover it. 00:34:14.472 [2024-07-26 01:16:44.679389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.472 [2024-07-26 01:16:44.679418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.472 qpair failed and we were unable to recover it. 00:34:14.472 [2024-07-26 01:16:44.679526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.472 [2024-07-26 01:16:44.679554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.472 qpair failed and we were unable to recover it. 00:34:14.472 [2024-07-26 01:16:44.679728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.472 [2024-07-26 01:16:44.679756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.472 qpair failed and we were unable to recover it. 
00:34:14.472 [2024-07-26 01:16:44.679893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.472 [2024-07-26 01:16:44.679932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.472 qpair failed and we were unable to recover it. 00:34:14.472 [2024-07-26 01:16:44.680102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.472 [2024-07-26 01:16:44.680129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.472 qpair failed and we were unable to recover it. 00:34:14.472 [2024-07-26 01:16:44.680239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.472 [2024-07-26 01:16:44.680265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.472 qpair failed and we were unable to recover it. 00:34:14.472 [2024-07-26 01:16:44.680409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.472 [2024-07-26 01:16:44.680436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.472 qpair failed and we were unable to recover it. 00:34:14.472 [2024-07-26 01:16:44.680576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.472 [2024-07-26 01:16:44.680602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.472 qpair failed and we were unable to recover it. 
00:34:14.472 [2024-07-26 01:16:44.680740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.472 [2024-07-26 01:16:44.680769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.473 qpair failed and we were unable to recover it. 00:34:14.473 [2024-07-26 01:16:44.680907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.473 [2024-07-26 01:16:44.680935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.473 qpair failed and we were unable to recover it. 00:34:14.473 [2024-07-26 01:16:44.681099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.473 [2024-07-26 01:16:44.681127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.473 qpair failed and we were unable to recover it. 00:34:14.473 [2024-07-26 01:16:44.681259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.473 [2024-07-26 01:16:44.681286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.473 qpair failed and we were unable to recover it. 00:34:14.473 [2024-07-26 01:16:44.681455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.473 [2024-07-26 01:16:44.681486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.473 qpair failed and we were unable to recover it. 
00:34:14.473 [2024-07-26 01:16:44.681619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.473 [2024-07-26 01:16:44.681646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.473 qpair failed and we were unable to recover it. 00:34:14.473 [2024-07-26 01:16:44.681787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.473 [2024-07-26 01:16:44.681814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.473 qpair failed and we were unable to recover it. 00:34:14.474 [2024-07-26 01:16:44.681927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.474 [2024-07-26 01:16:44.681954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.474 qpair failed and we were unable to recover it. 00:34:14.474 [2024-07-26 01:16:44.682050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.474 [2024-07-26 01:16:44.682081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.474 qpair failed and we were unable to recover it. 00:34:14.474 [2024-07-26 01:16:44.682239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.474 [2024-07-26 01:16:44.682266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.474 qpair failed and we were unable to recover it. 
00:34:14.474 [2024-07-26 01:16:44.682432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.474 [2024-07-26 01:16:44.682458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.474 qpair failed and we were unable to recover it. 00:34:14.474 [2024-07-26 01:16:44.682598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.474 [2024-07-26 01:16:44.682624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.474 qpair failed and we were unable to recover it. 00:34:14.474 [2024-07-26 01:16:44.682738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.474 [2024-07-26 01:16:44.682769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.474 qpair failed and we were unable to recover it. 00:34:14.474 [2024-07-26 01:16:44.682878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.474 [2024-07-26 01:16:44.682908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.474 qpair failed and we were unable to recover it. 00:34:14.474 [2024-07-26 01:16:44.683056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.474 [2024-07-26 01:16:44.683089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.474 qpair failed and we were unable to recover it. 
00:34:14.474 [2024-07-26 01:16:44.683221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.474 [2024-07-26 01:16:44.683248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.474 qpair failed and we were unable to recover it.
00:34:14.474 [2024-07-26 01:16:44.683368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.474 [2024-07-26 01:16:44.683395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.474 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error; "qpair failed and we were unable to recover it." — repeats from 01:16:44.683533 through 01:16:44.701682, alternating between tqpair=0xd70600, tqpair=0x7fba58000b90, and tqpair=0x7fba60000b90, all with addr=10.0.0.2, port=4420 ...]
00:34:14.484 [2024-07-26 01:16:44.701706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.484 [2024-07-26 01:16:44.701732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.484 qpair failed and we were unable to recover it.
00:34:14.484 [2024-07-26 01:16:44.701850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.484 [2024-07-26 01:16:44.701877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.484 qpair failed and we were unable to recover it. 00:34:14.484 [2024-07-26 01:16:44.701986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.484 [2024-07-26 01:16:44.702012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.484 qpair failed and we were unable to recover it. 00:34:14.484 [2024-07-26 01:16:44.702119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.484 [2024-07-26 01:16:44.702145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.484 qpair failed and we were unable to recover it. 00:34:14.484 [2024-07-26 01:16:44.702259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.484 [2024-07-26 01:16:44.702285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.484 qpair failed and we were unable to recover it. 00:34:14.484 [2024-07-26 01:16:44.702390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.484 [2024-07-26 01:16:44.702416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.484 qpair failed and we were unable to recover it. 
00:34:14.484 [2024-07-26 01:16:44.702573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.484 [2024-07-26 01:16:44.702599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.484 qpair failed and we were unable to recover it. 00:34:14.484 [2024-07-26 01:16:44.702705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.484 [2024-07-26 01:16:44.702733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.484 qpair failed and we were unable to recover it. 00:34:14.484 [2024-07-26 01:16:44.702875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.484 [2024-07-26 01:16:44.702902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.484 qpair failed and we were unable to recover it. 00:34:14.484 [2024-07-26 01:16:44.703015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.484 [2024-07-26 01:16:44.703043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.484 qpair failed and we were unable to recover it. 00:34:14.484 [2024-07-26 01:16:44.703190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.485 [2024-07-26 01:16:44.703222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.485 qpair failed and we were unable to recover it. 
00:34:14.485 [2024-07-26 01:16:44.703384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.485 [2024-07-26 01:16:44.703411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.485 qpair failed and we were unable to recover it. 00:34:14.485 [2024-07-26 01:16:44.703527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.485 [2024-07-26 01:16:44.703554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.485 qpair failed and we were unable to recover it. 00:34:14.485 [2024-07-26 01:16:44.703690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.485 [2024-07-26 01:16:44.703717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.485 qpair failed and we were unable to recover it. 00:34:14.485 [2024-07-26 01:16:44.703881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.485 [2024-07-26 01:16:44.703908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.485 qpair failed and we were unable to recover it. 00:34:14.485 [2024-07-26 01:16:44.704035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.485 [2024-07-26 01:16:44.704066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.485 qpair failed and we were unable to recover it. 
00:34:14.485 [2024-07-26 01:16:44.704189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.485 [2024-07-26 01:16:44.704216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.485 qpair failed and we were unable to recover it. 00:34:14.485 [2024-07-26 01:16:44.704359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.485 [2024-07-26 01:16:44.704385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.485 qpair failed and we were unable to recover it. 00:34:14.485 [2024-07-26 01:16:44.704524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.485 [2024-07-26 01:16:44.704551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.486 qpair failed and we were unable to recover it. 00:34:14.486 [2024-07-26 01:16:44.704698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.486 [2024-07-26 01:16:44.704725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.486 qpair failed and we were unable to recover it. 00:34:14.486 [2024-07-26 01:16:44.704823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.486 [2024-07-26 01:16:44.704849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.486 qpair failed and we were unable to recover it. 
00:34:14.486 [2024-07-26 01:16:44.704956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.486 [2024-07-26 01:16:44.704982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.486 qpair failed and we were unable to recover it. 00:34:14.486 [2024-07-26 01:16:44.705122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.486 [2024-07-26 01:16:44.705149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.486 qpair failed and we were unable to recover it. 00:34:14.486 [2024-07-26 01:16:44.705310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.486 [2024-07-26 01:16:44.705339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.486 qpair failed and we were unable to recover it. 00:34:14.486 [2024-07-26 01:16:44.705450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.486 [2024-07-26 01:16:44.705477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.486 qpair failed and we were unable to recover it. 00:34:14.486 [2024-07-26 01:16:44.705577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.486 [2024-07-26 01:16:44.705603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.486 qpair failed and we were unable to recover it. 
00:34:14.486 [2024-07-26 01:16:44.705707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.486 [2024-07-26 01:16:44.705733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.486 qpair failed and we were unable to recover it. 00:34:14.486 [2024-07-26 01:16:44.705842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.487 [2024-07-26 01:16:44.705869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.487 qpair failed and we were unable to recover it. 00:34:14.487 [2024-07-26 01:16:44.706000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.487 [2024-07-26 01:16:44.706026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.487 qpair failed and we were unable to recover it. 00:34:14.487 [2024-07-26 01:16:44.706177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.487 [2024-07-26 01:16:44.706204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.487 qpair failed and we were unable to recover it. 00:34:14.487 [2024-07-26 01:16:44.706364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.487 [2024-07-26 01:16:44.706391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.487 qpair failed and we were unable to recover it. 
00:34:14.487 [2024-07-26 01:16:44.706498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.487 [2024-07-26 01:16:44.706525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.487 qpair failed and we were unable to recover it. 00:34:14.487 [2024-07-26 01:16:44.706661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.487 [2024-07-26 01:16:44.706688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.487 qpair failed and we were unable to recover it. 00:34:14.487 [2024-07-26 01:16:44.706798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.487 [2024-07-26 01:16:44.706824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.487 qpair failed and we were unable to recover it. 00:34:14.487 [2024-07-26 01:16:44.706957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.487 [2024-07-26 01:16:44.706983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.488 qpair failed and we were unable to recover it. 00:34:14.488 [2024-07-26 01:16:44.707159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.488 [2024-07-26 01:16:44.707185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.488 qpair failed and we were unable to recover it. 
00:34:14.488 [2024-07-26 01:16:44.707295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.488 [2024-07-26 01:16:44.707325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.488 qpair failed and we were unable to recover it. 00:34:14.488 [2024-07-26 01:16:44.707462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.488 [2024-07-26 01:16:44.707487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.488 qpair failed and we were unable to recover it. 00:34:14.488 [2024-07-26 01:16:44.707629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.488 [2024-07-26 01:16:44.707656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.488 qpair failed and we were unable to recover it. 00:34:14.488 [2024-07-26 01:16:44.707813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.488 [2024-07-26 01:16:44.707839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.488 qpair failed and we were unable to recover it. 00:34:14.488 [2024-07-26 01:16:44.707938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.488 [2024-07-26 01:16:44.707964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.488 qpair failed and we were unable to recover it. 
00:34:14.488 [2024-07-26 01:16:44.708108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.489 [2024-07-26 01:16:44.708135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.489 qpair failed and we were unable to recover it. 00:34:14.489 [2024-07-26 01:16:44.708234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.489 [2024-07-26 01:16:44.708261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.489 qpair failed and we were unable to recover it. 00:34:14.489 [2024-07-26 01:16:44.708396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.489 [2024-07-26 01:16:44.708430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.489 qpair failed and we were unable to recover it. 00:34:14.489 [2024-07-26 01:16:44.708548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.489 [2024-07-26 01:16:44.708575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.489 qpair failed and we were unable to recover it. 00:34:14.489 [2024-07-26 01:16:44.708706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.489 [2024-07-26 01:16:44.708736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.489 qpair failed and we were unable to recover it. 
00:34:14.489 [2024-07-26 01:16:44.708867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.489 [2024-07-26 01:16:44.708894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.489 qpair failed and we were unable to recover it. 00:34:14.489 [2024-07-26 01:16:44.709004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.489 [2024-07-26 01:16:44.709031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.490 qpair failed and we were unable to recover it. 00:34:14.490 [2024-07-26 01:16:44.709140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.490 [2024-07-26 01:16:44.709166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.490 qpair failed and we were unable to recover it. 00:34:14.490 [2024-07-26 01:16:44.709291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.490 [2024-07-26 01:16:44.709331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.490 qpair failed and we were unable to recover it. 00:34:14.490 [2024-07-26 01:16:44.709480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.491 [2024-07-26 01:16:44.709508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.491 qpair failed and we were unable to recover it. 
00:34:14.491 [2024-07-26 01:16:44.709653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.491 [2024-07-26 01:16:44.709680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.491 qpair failed and we were unable to recover it. 00:34:14.491 [2024-07-26 01:16:44.709825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.491 [2024-07-26 01:16:44.709853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.491 qpair failed and we were unable to recover it. 00:34:14.491 [2024-07-26 01:16:44.710013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.491 [2024-07-26 01:16:44.710039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.491 qpair failed and we were unable to recover it. 00:34:14.491 [2024-07-26 01:16:44.710194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.491 [2024-07-26 01:16:44.710221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.491 qpair failed and we were unable to recover it. 00:34:14.491 [2024-07-26 01:16:44.710383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.491 [2024-07-26 01:16:44.710411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.491 qpair failed and we were unable to recover it. 
00:34:14.491 [2024-07-26 01:16:44.710547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.491 [2024-07-26 01:16:44.710574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.491 qpair failed and we were unable to recover it. 00:34:14.491 [2024-07-26 01:16:44.710681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.491 [2024-07-26 01:16:44.710708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.491 qpair failed and we were unable to recover it. 00:34:14.491 [2024-07-26 01:16:44.710873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.491 [2024-07-26 01:16:44.710900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.491 qpair failed and we were unable to recover it. 00:34:14.491 [2024-07-26 01:16:44.711009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.491 [2024-07-26 01:16:44.711035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.491 qpair failed and we were unable to recover it. 00:34:14.492 [2024-07-26 01:16:44.711156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.492 [2024-07-26 01:16:44.711184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.492 qpair failed and we were unable to recover it. 
00:34:14.492 [2024-07-26 01:16:44.711317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.492 [2024-07-26 01:16:44.711344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.492 qpair failed and we were unable to recover it. 00:34:14.492 [2024-07-26 01:16:44.711503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.492 [2024-07-26 01:16:44.711529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.492 qpair failed and we were unable to recover it. 00:34:14.492 [2024-07-26 01:16:44.711636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.492 [2024-07-26 01:16:44.711662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.492 qpair failed and we were unable to recover it. 00:34:14.492 [2024-07-26 01:16:44.711806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.492 [2024-07-26 01:16:44.711835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.492 qpair failed and we were unable to recover it. 00:34:14.492 [2024-07-26 01:16:44.711981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.492 [2024-07-26 01:16:44.712008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.492 qpair failed and we were unable to recover it. 
00:34:14.492 [2024-07-26 01:16:44.712167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.492 [2024-07-26 01:16:44.712195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.492 qpair failed and we were unable to recover it. 00:34:14.492 [2024-07-26 01:16:44.712336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.493 [2024-07-26 01:16:44.712363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.493 qpair failed and we were unable to recover it. 00:34:14.493 [2024-07-26 01:16:44.712468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.493 [2024-07-26 01:16:44.712495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.493 qpair failed and we were unable to recover it. 00:34:14.493 [2024-07-26 01:16:44.712598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.493 [2024-07-26 01:16:44.712625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.493 qpair failed and we were unable to recover it. 00:34:14.493 [2024-07-26 01:16:44.712789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.493 [2024-07-26 01:16:44.712817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.493 qpair failed and we were unable to recover it. 
00:34:14.493 [2024-07-26 01:16:44.712925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.493 [2024-07-26 01:16:44.712952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.493 qpair failed and we were unable to recover it. 00:34:14.493 [2024-07-26 01:16:44.713112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.493 [2024-07-26 01:16:44.713138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.493 qpair failed and we were unable to recover it. 00:34:14.493 [2024-07-26 01:16:44.713242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.493 [2024-07-26 01:16:44.713268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.493 qpair failed and we were unable to recover it. 00:34:14.493 [2024-07-26 01:16:44.713398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.493 [2024-07-26 01:16:44.713424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.493 qpair failed and we were unable to recover it. 00:34:14.494 [2024-07-26 01:16:44.713579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.494 [2024-07-26 01:16:44.713605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.494 qpair failed and we were unable to recover it. 
00:34:14.494 [2024-07-26 01:16:44.713738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.494 [2024-07-26 01:16:44.713764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.494 qpair failed and we were unable to recover it. 00:34:14.494 [2024-07-26 01:16:44.713892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.494 [2024-07-26 01:16:44.713918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.494 qpair failed and we were unable to recover it. 00:34:14.494 [2024-07-26 01:16:44.714063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.494 [2024-07-26 01:16:44.714090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.494 qpair failed and we were unable to recover it. 00:34:14.494 [2024-07-26 01:16:44.714226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.494 [2024-07-26 01:16:44.714252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.494 qpair failed and we were unable to recover it. 00:34:14.494 [2024-07-26 01:16:44.714396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.494 [2024-07-26 01:16:44.714422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.494 qpair failed and we were unable to recover it. 
00:34:14.494 [2024-07-26 01:16:44.714556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.494 [2024-07-26 01:16:44.714583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.494 qpair failed and we were unable to recover it. 00:34:14.494 [2024-07-26 01:16:44.714718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.494 [2024-07-26 01:16:44.714744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.494 qpair failed and we were unable to recover it. 00:34:14.494 [2024-07-26 01:16:44.714878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.495 [2024-07-26 01:16:44.714905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.495 qpair failed and we were unable to recover it. 00:34:14.495 [2024-07-26 01:16:44.715044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.495 [2024-07-26 01:16:44.715075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.495 qpair failed and we were unable to recover it. 00:34:14.495 [2024-07-26 01:16:44.715236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.495 [2024-07-26 01:16:44.715262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.495 qpair failed and we were unable to recover it. 
00:34:14.495 [2024-07-26 01:16:44.715402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.495 [2024-07-26 01:16:44.715428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.495 qpair failed and we were unable to recover it. 00:34:14.495 [2024-07-26 01:16:44.715563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.495 [2024-07-26 01:16:44.715589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.495 qpair failed and we were unable to recover it. 00:34:14.495 [2024-07-26 01:16:44.715699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.495 [2024-07-26 01:16:44.715725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.495 qpair failed and we were unable to recover it. 00:34:14.495 [2024-07-26 01:16:44.715869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.495 [2024-07-26 01:16:44.715896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.495 qpair failed and we were unable to recover it. 00:34:14.495 [2024-07-26 01:16:44.716002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.495 [2024-07-26 01:16:44.716028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.495 qpair failed and we were unable to recover it. 
00:34:14.495 [2024-07-26 01:16:44.716194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.496 [2024-07-26 01:16:44.716221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.496 qpair failed and we were unable to recover it. 00:34:14.496 [2024-07-26 01:16:44.716361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.496 [2024-07-26 01:16:44.716388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.496 qpair failed and we were unable to recover it. 00:34:14.496 [2024-07-26 01:16:44.716516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.496 [2024-07-26 01:16:44.716542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.496 qpair failed and we were unable to recover it. 00:34:14.496 [2024-07-26 01:16:44.716666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.496 [2024-07-26 01:16:44.716692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.496 qpair failed and we were unable to recover it. 00:34:14.496 [2024-07-26 01:16:44.716798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.496 [2024-07-26 01:16:44.716825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.496 qpair failed and we were unable to recover it. 
00:34:14.496 [2024-07-26 01:16:44.716939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.496 [2024-07-26 01:16:44.716965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.496 qpair failed and we were unable to recover it. 00:34:14.496 [2024-07-26 01:16:44.717103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.496 [2024-07-26 01:16:44.717129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.496 qpair failed and we were unable to recover it. 00:34:14.496 [2024-07-26 01:16:44.717263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.496 [2024-07-26 01:16:44.717290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.496 qpair failed and we were unable to recover it. 00:34:14.496 [2024-07-26 01:16:44.717426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.496 [2024-07-26 01:16:44.717452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.496 qpair failed and we were unable to recover it. 00:34:14.496 [2024-07-26 01:16:44.717597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.496 [2024-07-26 01:16:44.717623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.496 qpair failed and we were unable to recover it. 
00:34:14.496 [2024-07-26 01:16:44.717757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.496 [2024-07-26 01:16:44.717783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.496 qpair failed and we were unable to recover it. 00:34:14.496 [2024-07-26 01:16:44.717923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.496 [2024-07-26 01:16:44.717950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.496 qpair failed and we were unable to recover it. 00:34:14.496 [2024-07-26 01:16:44.718078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.496 [2024-07-26 01:16:44.718119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.496 qpair failed and we were unable to recover it. 00:34:14.496 [2024-07-26 01:16:44.718252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.496 [2024-07-26 01:16:44.718281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.496 qpair failed and we were unable to recover it. 00:34:14.496 [2024-07-26 01:16:44.718419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.496 [2024-07-26 01:16:44.718452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.496 qpair failed and we were unable to recover it. 
00:34:14.496 [2024-07-26 01:16:44.718593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.496 [2024-07-26 01:16:44.718621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.496 qpair failed and we were unable to recover it. 00:34:14.496 [2024-07-26 01:16:44.718728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.496 [2024-07-26 01:16:44.718756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.496 qpair failed and we were unable to recover it. 00:34:14.496 [2024-07-26 01:16:44.718891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.718919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.719035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.719066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.719182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.719208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 
00:34:14.497 [2024-07-26 01:16:44.719315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.719341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.719450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.719476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.719607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.719633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.719780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.719807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.719911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.719937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 
00:34:14.497 [2024-07-26 01:16:44.720099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.720126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.720239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.720266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.720367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.720393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.720525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.720551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.720697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.720723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 
00:34:14.497 [2024-07-26 01:16:44.720854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.720880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.721016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.721042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.721156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.721183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.721295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.721322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.721457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.721483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 
00:34:14.497 [2024-07-26 01:16:44.721633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.721661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.721763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.721789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.721889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.721925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.722034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.722064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.722225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.722251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 
00:34:14.497 [2024-07-26 01:16:44.722413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.722439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.722546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.722576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.722678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.722704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.722818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.722844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.722960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.722986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 
00:34:14.497 [2024-07-26 01:16:44.723126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.723152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.723285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.723311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.723410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.723437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.723539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.723565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.723698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.723724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 
00:34:14.497 [2024-07-26 01:16:44.723840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.723866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.723965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.723992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.724103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.724130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.724229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.724254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.724397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.724423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 
00:34:14.497 [2024-07-26 01:16:44.724533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.724560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.724688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.724714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.724873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.724899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.725002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.725029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.725148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.725174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 
00:34:14.497 [2024-07-26 01:16:44.725349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.725390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.725527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.725557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.725719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.725747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.725884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.725912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.726058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.726090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 
00:34:14.497 [2024-07-26 01:16:44.726200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.726228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.726366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.726393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.726524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.726552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.726662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.726694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.726833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.726860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 
00:34:14.497 [2024-07-26 01:16:44.727009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.727051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.727203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.727232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.727338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.727365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.727484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.727511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.727643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.727669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 
00:34:14.497 [2024-07-26 01:16:44.727809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.727836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.727969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.497 [2024-07-26 01:16:44.727995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.497 qpair failed and we were unable to recover it. 00:34:14.497 [2024-07-26 01:16:44.728107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.498 [2024-07-26 01:16:44.728134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.498 qpair failed and we were unable to recover it. 00:34:14.498 [2024-07-26 01:16:44.728268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.498 [2024-07-26 01:16:44.728294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.498 qpair failed and we were unable to recover it. 00:34:14.498 [2024-07-26 01:16:44.728429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.498 [2024-07-26 01:16:44.728457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.498 qpair failed and we were unable to recover it. 
00:34:14.498 [2024-07-26 01:16:44.728599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.498 [2024-07-26 01:16:44.728626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.498 qpair failed and we were unable to recover it. 00:34:14.498 [2024-07-26 01:16:44.728787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.498 [2024-07-26 01:16:44.728814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.498 qpair failed and we were unable to recover it. 00:34:14.498 [2024-07-26 01:16:44.728977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.498 [2024-07-26 01:16:44.729004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.498 qpair failed and we were unable to recover it. 00:34:14.498 [2024-07-26 01:16:44.729140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.498 [2024-07-26 01:16:44.729168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.498 qpair failed and we were unable to recover it. 00:34:14.498 [2024-07-26 01:16:44.729307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.498 [2024-07-26 01:16:44.729335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.498 qpair failed and we were unable to recover it. 
00:34:14.498 [2024-07-26 01:16:44.729469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.498 [2024-07-26 01:16:44.729496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.498 qpair failed and we were unable to recover it. 00:34:14.498 [2024-07-26 01:16:44.729610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.498 [2024-07-26 01:16:44.729638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.498 qpair failed and we were unable to recover it. 00:34:14.498 [2024-07-26 01:16:44.729756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.498 [2024-07-26 01:16:44.729784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.498 qpair failed and we were unable to recover it. 00:34:14.498 [2024-07-26 01:16:44.729923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.498 [2024-07-26 01:16:44.729949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.498 qpair failed and we were unable to recover it. 00:34:14.498 [2024-07-26 01:16:44.730085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.498 [2024-07-26 01:16:44.730113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.498 qpair failed and we were unable to recover it. 
00:34:14.498 [2024-07-26 01:16:44.730274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.730300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.730438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.730464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.730571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.730597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.730737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.730764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.730920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.730960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.731084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.731122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.731228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.731254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.731361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.731387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.731502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.731529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.731674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.731700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.731859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.731885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.732016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.732042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.732159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.732188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.732346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.732373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.732505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.732532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.732694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.732721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.732858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.732884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.733042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.733074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.733180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.733207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.733346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.733373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.733509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.733536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.733676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.733703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.733841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.733867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.733969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.733995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.734115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.734155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.734273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.734301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.734459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.734485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.734591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.734625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.734730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.734756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.734886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.734912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.735047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.735079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.735200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.735227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.735353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.735393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.735557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.735585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.735695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.735722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.735863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.735891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.736031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.736067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.736201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.736228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.736361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.498 [2024-07-26 01:16:44.736388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.498 qpair failed and we were unable to recover it.
00:34:14.498 [2024-07-26 01:16:44.736525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.736552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.736673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.736700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.736834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.736862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.737002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.737029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.737148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.737174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.737272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.737298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.737397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.737423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.737559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.737585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.737693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.737719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.737835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.737861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.737991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.738017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.738162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.738191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.738331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.738360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.738499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.738527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.738665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.738692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.738818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.738845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.738982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.739010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.739119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.739147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.739280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.739306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.739446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.739472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.739616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.739644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.739760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.739788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.739921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.739948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.740112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.740140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.740242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.740269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.740404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.740432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.740593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.740620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.740753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.740780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.740890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.740917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.741057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.741092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.741260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.741286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.741397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.499 [2024-07-26 01:16:44.741424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.499 qpair failed and we were unable to recover it.
00:34:14.499 [2024-07-26 01:16:44.741556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.500 [2024-07-26 01:16:44.741582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.500 qpair failed and we were unable to recover it.
00:34:14.500 [2024-07-26 01:16:44.741739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.500 [2024-07-26 01:16:44.741769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.500 qpair failed and we were unable to recover it.
00:34:14.500 [2024-07-26 01:16:44.741899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.500 [2024-07-26 01:16:44.741925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.500 qpair failed and we were unable to recover it.
00:34:14.500 [2024-07-26 01:16:44.742031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.500 [2024-07-26 01:16:44.742064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.500 qpair failed and we were unable to recover it.
00:34:14.500 [2024-07-26 01:16:44.742203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.500 [2024-07-26 01:16:44.742230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.500 qpair failed and we were unable to recover it.
00:34:14.500 [2024-07-26 01:16:44.742340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.500 [2024-07-26 01:16:44.742367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.500 qpair failed and we were unable to recover it.
00:34:14.500 [2024-07-26 01:16:44.742499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.500 [2024-07-26 01:16:44.742526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.500 qpair failed and we were unable to recover it.
00:34:14.500 [2024-07-26 01:16:44.742642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.500 [2024-07-26 01:16:44.742670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.500 qpair failed and we were unable to recover it.
00:34:14.500 [2024-07-26 01:16:44.742814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.500 [2024-07-26 01:16:44.742842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.500 qpair failed and we were unable to recover it.
00:34:14.500 [2024-07-26 01:16:44.742978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.500 [2024-07-26 01:16:44.743005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.500 qpair failed and we were unable to recover it.
00:34:14.500 [2024-07-26 01:16:44.743132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.500 [2024-07-26 01:16:44.743160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.500 qpair failed and we were unable to recover it.
00:34:14.500 [2024-07-26 01:16:44.743298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.500 [2024-07-26 01:16:44.743337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.500 qpair failed and we were unable to recover it.
00:34:14.500 [2024-07-26 01:16:44.743467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.500 [2024-07-26 01:16:44.743494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.500 qpair failed and we were unable to recover it.
00:34:14.500 [2024-07-26 01:16:44.743626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.500 [2024-07-26 01:16:44.743653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.500 qpair failed and we were unable to recover it.
00:34:14.500 [2024-07-26 01:16:44.743790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.500 [2024-07-26 01:16:44.743817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.500 qpair failed and we were unable to recover it.
00:34:14.500 [2024-07-26 01:16:44.743961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.501 [2024-07-26 01:16:44.743988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.501 qpair failed and we were unable to recover it.
00:34:14.501 [2024-07-26 01:16:44.744147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.501 [2024-07-26 01:16:44.744175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.501 qpair failed and we were unable to recover it.
00:34:14.501 [2024-07-26 01:16:44.744308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.501 [2024-07-26 01:16:44.744334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.501 qpair failed and we were unable to recover it.
00:34:14.501 [2024-07-26 01:16:44.744444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.501 [2024-07-26 01:16:44.744471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.501 qpair failed and we were unable to recover it.
00:34:14.501 [2024-07-26 01:16:44.744601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.501 [2024-07-26 01:16:44.744628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.501 qpair failed and we were unable to recover it.
00:34:14.501 [2024-07-26 01:16:44.744765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.501 [2024-07-26 01:16:44.744794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.501 qpair failed and we were unable to recover it.
00:34:14.501 [2024-07-26 01:16:44.744912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.501 [2024-07-26 01:16:44.744950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.501 qpair failed and we were unable to recover it.
00:34:14.501 [2024-07-26 01:16:44.745100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.501 [2024-07-26 01:16:44.745127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.501 qpair failed and we were unable to recover it.
00:34:14.501 [2024-07-26 01:16:44.745245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.501 [2024-07-26 01:16:44.745284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.501 qpair failed and we were unable to recover it.
00:34:14.501 [2024-07-26 01:16:44.745428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.501 [2024-07-26 01:16:44.745455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.501 qpair failed and we were unable to recover it. 00:34:14.501 [2024-07-26 01:16:44.745572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.501 [2024-07-26 01:16:44.745599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.502 qpair failed and we were unable to recover it. 00:34:14.502 [2024-07-26 01:16:44.745706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.502 [2024-07-26 01:16:44.745732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.502 qpair failed and we were unable to recover it. 00:34:14.502 [2024-07-26 01:16:44.745884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.502 [2024-07-26 01:16:44.745911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.502 qpair failed and we were unable to recover it. 00:34:14.502 [2024-07-26 01:16:44.746069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.502 [2024-07-26 01:16:44.746100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.502 qpair failed and we were unable to recover it. 
00:34:14.502 [2024-07-26 01:16:44.746212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.502 [2024-07-26 01:16:44.746239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.502 qpair failed and we were unable to recover it. 00:34:14.502 [2024-07-26 01:16:44.746350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.502 [2024-07-26 01:16:44.746376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.502 qpair failed and we were unable to recover it. 00:34:14.502 [2024-07-26 01:16:44.746536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.502 [2024-07-26 01:16:44.746562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.502 qpair failed and we were unable to recover it. 00:34:14.502 [2024-07-26 01:16:44.746668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.502 [2024-07-26 01:16:44.746694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.502 qpair failed and we were unable to recover it. 00:34:14.502 [2024-07-26 01:16:44.746798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.502 [2024-07-26 01:16:44.746825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.502 qpair failed and we were unable to recover it. 
00:34:14.503 [2024-07-26 01:16:44.746945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.503 [2024-07-26 01:16:44.746986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.503 qpair failed and we were unable to recover it. 00:34:14.503 [2024-07-26 01:16:44.747145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.503 [2024-07-26 01:16:44.747185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.503 qpair failed and we were unable to recover it. 00:34:14.503 [2024-07-26 01:16:44.747306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.503 [2024-07-26 01:16:44.747333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.503 qpair failed and we were unable to recover it. 00:34:14.503 [2024-07-26 01:16:44.747443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.503 [2024-07-26 01:16:44.747470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.503 qpair failed and we were unable to recover it. 00:34:14.503 [2024-07-26 01:16:44.747617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.503 [2024-07-26 01:16:44.747643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.503 qpair failed and we were unable to recover it. 
00:34:14.503 [2024-07-26 01:16:44.747808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.503 [2024-07-26 01:16:44.747834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.503 qpair failed and we were unable to recover it. 00:34:14.503 [2024-07-26 01:16:44.747952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.503 [2024-07-26 01:16:44.747978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.503 qpair failed and we were unable to recover it. 00:34:14.503 [2024-07-26 01:16:44.748075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.503 [2024-07-26 01:16:44.748101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.503 qpair failed and we were unable to recover it. 00:34:14.504 [2024-07-26 01:16:44.748234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.504 [2024-07-26 01:16:44.748260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.504 qpair failed and we were unable to recover it. 00:34:14.504 [2024-07-26 01:16:44.748373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.504 [2024-07-26 01:16:44.748399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.504 qpair failed and we were unable to recover it. 
00:34:14.504 [2024-07-26 01:16:44.748535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.504 [2024-07-26 01:16:44.748561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.504 qpair failed and we were unable to recover it. 00:34:14.504 [2024-07-26 01:16:44.748697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.504 [2024-07-26 01:16:44.748723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.504 qpair failed and we were unable to recover it. 00:34:14.504 [2024-07-26 01:16:44.748849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.504 [2024-07-26 01:16:44.748875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.504 qpair failed and we were unable to recover it. 00:34:14.504 [2024-07-26 01:16:44.749010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.504 [2024-07-26 01:16:44.749036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.504 qpair failed and we were unable to recover it. 00:34:14.504 [2024-07-26 01:16:44.749150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.504 [2024-07-26 01:16:44.749179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.505 qpair failed and we were unable to recover it. 
00:34:14.505 [2024-07-26 01:16:44.749289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.505 [2024-07-26 01:16:44.749316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.505 qpair failed and we were unable to recover it. 00:34:14.505 [2024-07-26 01:16:44.749456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.505 [2024-07-26 01:16:44.749482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.505 qpair failed and we were unable to recover it. 00:34:14.505 [2024-07-26 01:16:44.749588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.505 [2024-07-26 01:16:44.749615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.505 qpair failed and we were unable to recover it. 00:34:14.505 [2024-07-26 01:16:44.749721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.505 [2024-07-26 01:16:44.749747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.505 qpair failed and we were unable to recover it. 00:34:14.505 [2024-07-26 01:16:44.749880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.505 [2024-07-26 01:16:44.749907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.505 qpair failed and we were unable to recover it. 
00:34:14.505 [2024-07-26 01:16:44.750037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.505 [2024-07-26 01:16:44.750070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.505 qpair failed and we were unable to recover it. 00:34:14.505 [2024-07-26 01:16:44.750254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.505 [2024-07-26 01:16:44.750295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.505 qpair failed and we were unable to recover it. 00:34:14.505 [2024-07-26 01:16:44.750440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.505 [2024-07-26 01:16:44.750467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.505 qpair failed and we were unable to recover it. 00:34:14.506 [2024-07-26 01:16:44.750599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.506 [2024-07-26 01:16:44.750626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.506 qpair failed and we were unable to recover it. 00:34:14.506 [2024-07-26 01:16:44.750731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.506 [2024-07-26 01:16:44.750758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.506 qpair failed and we were unable to recover it. 
00:34:14.506 [2024-07-26 01:16:44.750878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.506 [2024-07-26 01:16:44.750906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.506 qpair failed and we were unable to recover it. 00:34:14.506 [2024-07-26 01:16:44.751031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.506 [2024-07-26 01:16:44.751077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.506 qpair failed and we were unable to recover it. 00:34:14.506 [2024-07-26 01:16:44.751187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.506 [2024-07-26 01:16:44.751214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.507 qpair failed and we were unable to recover it. 00:34:14.507 [2024-07-26 01:16:44.751374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.507 [2024-07-26 01:16:44.751414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.507 qpair failed and we were unable to recover it. 00:34:14.507 [2024-07-26 01:16:44.751575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.507 [2024-07-26 01:16:44.751603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.507 qpair failed and we were unable to recover it. 
00:34:14.507 [2024-07-26 01:16:44.751742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.507 [2024-07-26 01:16:44.751773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.507 qpair failed and we were unable to recover it. 00:34:14.507 [2024-07-26 01:16:44.751932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.507 [2024-07-26 01:16:44.751959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.507 qpair failed and we were unable to recover it. 00:34:14.507 [2024-07-26 01:16:44.752093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.507 [2024-07-26 01:16:44.752130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.507 qpair failed and we were unable to recover it. 00:34:14.507 [2024-07-26 01:16:44.752268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.507 [2024-07-26 01:16:44.752296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.507 qpair failed and we were unable to recover it. 00:34:14.507 [2024-07-26 01:16:44.752466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.507 [2024-07-26 01:16:44.752498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.507 qpair failed and we were unable to recover it. 
00:34:14.508 [2024-07-26 01:16:44.752659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.508 [2024-07-26 01:16:44.752686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.508 qpair failed and we were unable to recover it. 00:34:14.508 [2024-07-26 01:16:44.752824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.508 [2024-07-26 01:16:44.752852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.508 qpair failed and we were unable to recover it. 00:34:14.508 [2024-07-26 01:16:44.752973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.508 [2024-07-26 01:16:44.753000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.508 qpair failed and we were unable to recover it. 00:34:14.508 [2024-07-26 01:16:44.753120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.508 [2024-07-26 01:16:44.753148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.508 qpair failed and we were unable to recover it. 00:34:14.508 [2024-07-26 01:16:44.753297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.508 [2024-07-26 01:16:44.753324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.508 qpair failed and we were unable to recover it. 
00:34:14.508 [2024-07-26 01:16:44.753457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.508 [2024-07-26 01:16:44.753484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.508 qpair failed and we were unable to recover it. 00:34:14.508 [2024-07-26 01:16:44.753624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.508 [2024-07-26 01:16:44.753650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.508 qpair failed and we were unable to recover it. 00:34:14.508 [2024-07-26 01:16:44.753766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.508 [2024-07-26 01:16:44.753794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.508 qpair failed and we were unable to recover it. 00:34:14.508 [2024-07-26 01:16:44.753944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.508 [2024-07-26 01:16:44.753972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.508 qpair failed and we were unable to recover it. 00:34:14.508 [2024-07-26 01:16:44.754139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.508 [2024-07-26 01:16:44.754167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.508 qpair failed and we were unable to recover it. 
00:34:14.508 [2024-07-26 01:16:44.754270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.508 [2024-07-26 01:16:44.754297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.508 qpair failed and we were unable to recover it. 00:34:14.508 [2024-07-26 01:16:44.754431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.509 [2024-07-26 01:16:44.754458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.509 qpair failed and we were unable to recover it. 00:34:14.509 [2024-07-26 01:16:44.754563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.509 [2024-07-26 01:16:44.754589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.509 qpair failed and we were unable to recover it. 00:34:14.509 [2024-07-26 01:16:44.754759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.509 [2024-07-26 01:16:44.754786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.509 qpair failed and we were unable to recover it. 00:34:14.509 [2024-07-26 01:16:44.754903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.509 [2024-07-26 01:16:44.754943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.509 qpair failed and we were unable to recover it. 
00:34:14.509 [2024-07-26 01:16:44.755050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.509 [2024-07-26 01:16:44.755084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.509 qpair failed and we were unable to recover it. 00:34:14.510 [2024-07-26 01:16:44.755226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.510 [2024-07-26 01:16:44.755253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.510 qpair failed and we were unable to recover it. 00:34:14.510 [2024-07-26 01:16:44.755392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.510 [2024-07-26 01:16:44.755418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.510 qpair failed and we were unable to recover it. 00:34:14.510 [2024-07-26 01:16:44.755523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.510 [2024-07-26 01:16:44.755549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.510 qpair failed and we were unable to recover it. 00:34:14.510 [2024-07-26 01:16:44.755658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.510 [2024-07-26 01:16:44.755684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.510 qpair failed and we were unable to recover it. 
00:34:14.510 [2024-07-26 01:16:44.755816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.510 [2024-07-26 01:16:44.755842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.510 qpair failed and we were unable to recover it. 00:34:14.510 [2024-07-26 01:16:44.755979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.510 [2024-07-26 01:16:44.756005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.510 qpair failed and we were unable to recover it. 00:34:14.510 [2024-07-26 01:16:44.756117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.510 [2024-07-26 01:16:44.756143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.511 qpair failed and we were unable to recover it. 00:34:14.511 [2024-07-26 01:16:44.756273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.511 [2024-07-26 01:16:44.756299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.511 qpair failed and we were unable to recover it. 00:34:14.511 [2024-07-26 01:16:44.756440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.511 [2024-07-26 01:16:44.756467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.511 qpair failed and we were unable to recover it. 
00:34:14.511 [2024-07-26 01:16:44.756640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.511 [2024-07-26 01:16:44.756667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.511 qpair failed and we were unable to recover it. 00:34:14.511 [2024-07-26 01:16:44.756805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.511 [2024-07-26 01:16:44.756837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.511 qpair failed and we were unable to recover it. 00:34:14.511 [2024-07-26 01:16:44.756990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.511 [2024-07-26 01:16:44.757030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.511 qpair failed and we were unable to recover it. 00:34:14.511 [2024-07-26 01:16:44.757168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.511 [2024-07-26 01:16:44.757209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.511 qpair failed and we were unable to recover it. 00:34:14.511 [2024-07-26 01:16:44.757325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.511 [2024-07-26 01:16:44.757354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.511 qpair failed and we were unable to recover it. 
00:34:14.511 [2024-07-26 01:16:44.757498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.511 [2024-07-26 01:16:44.757525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.511 qpair failed and we were unable to recover it. 00:34:14.511 [2024-07-26 01:16:44.757637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.511 [2024-07-26 01:16:44.757663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.511 qpair failed and we were unable to recover it. 00:34:14.511 [2024-07-26 01:16:44.757809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.511 [2024-07-26 01:16:44.757836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.511 qpair failed and we were unable to recover it. 00:34:14.512 [2024-07-26 01:16:44.757965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.512 [2024-07-26 01:16:44.757992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.512 qpair failed and we were unable to recover it. 00:34:14.512 [2024-07-26 01:16:44.758140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.512 [2024-07-26 01:16:44.758181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.512 qpair failed and we were unable to recover it. 
00:34:14.512 [2024-07-26 01:16:44.758353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.512 [2024-07-26 01:16:44.758381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.512 qpair failed and we were unable to recover it. 00:34:14.512 [2024-07-26 01:16:44.758493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.512 [2024-07-26 01:16:44.758519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.512 qpair failed and we were unable to recover it. 00:34:14.512 [2024-07-26 01:16:44.758681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.512 [2024-07-26 01:16:44.758707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.512 qpair failed and we were unable to recover it. 00:34:14.512 [2024-07-26 01:16:44.758823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.512 [2024-07-26 01:16:44.758849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.512 qpair failed and we were unable to recover it. 00:34:14.512 [2024-07-26 01:16:44.758991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.512 [2024-07-26 01:16:44.759031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.512 qpair failed and we were unable to recover it. 
00:34:14.512 [2024-07-26 01:16:44.759159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.512 [2024-07-26 01:16:44.759186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.512 qpair failed and we were unable to recover it. 00:34:14.513 [2024-07-26 01:16:44.759297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.513 [2024-07-26 01:16:44.759324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.513 qpair failed and we were unable to recover it. 00:34:14.513 [2024-07-26 01:16:44.759483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.513 [2024-07-26 01:16:44.759509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.513 qpair failed and we were unable to recover it. 00:34:14.513 [2024-07-26 01:16:44.759671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.513 [2024-07-26 01:16:44.759701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.513 qpair failed and we were unable to recover it. 00:34:14.513 [2024-07-26 01:16:44.759865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.513 [2024-07-26 01:16:44.759892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.513 qpair failed and we were unable to recover it. 
00:34:14.513 [2024-07-26 01:16:44.760002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.513 [2024-07-26 01:16:44.760028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.513 qpair failed and we were unable to recover it. 00:34:14.513 [2024-07-26 01:16:44.760168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.513 [2024-07-26 01:16:44.760198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.513 qpair failed and we were unable to recover it. 00:34:14.513 [2024-07-26 01:16:44.760369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.514 [2024-07-26 01:16:44.760396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.514 qpair failed and we were unable to recover it. 00:34:14.514 [2024-07-26 01:16:44.760515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.514 [2024-07-26 01:16:44.760549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.514 qpair failed and we were unable to recover it. 00:34:14.514 [2024-07-26 01:16:44.760688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.514 [2024-07-26 01:16:44.760715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.514 qpair failed and we were unable to recover it. 
00:34:14.514 [2024-07-26 01:16:44.760830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.514 [2024-07-26 01:16:44.760857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.515 qpair failed and we were unable to recover it. 00:34:14.515 [2024-07-26 01:16:44.760995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.515 [2024-07-26 01:16:44.761021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.515 qpair failed and we were unable to recover it. 00:34:14.515 [2024-07-26 01:16:44.761173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.515 [2024-07-26 01:16:44.761201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.515 qpair failed and we were unable to recover it. 00:34:14.515 [2024-07-26 01:16:44.761315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.515 [2024-07-26 01:16:44.761346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.515 qpair failed and we were unable to recover it. 00:34:14.515 [2024-07-26 01:16:44.761512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.515 [2024-07-26 01:16:44.761539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.515 qpair failed and we were unable to recover it. 
00:34:14.515 [2024-07-26 01:16:44.761672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.515 [2024-07-26 01:16:44.761699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.515 qpair failed and we were unable to recover it. 00:34:14.515 [2024-07-26 01:16:44.761848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.515 [2024-07-26 01:16:44.761874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.515 qpair failed and we were unable to recover it. 00:34:14.515 [2024-07-26 01:16:44.762036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.515 [2024-07-26 01:16:44.762071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.515 qpair failed and we were unable to recover it. 00:34:14.515 [2024-07-26 01:16:44.762224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.515 [2024-07-26 01:16:44.762250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.515 qpair failed and we were unable to recover it. 00:34:14.515 [2024-07-26 01:16:44.762409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.516 [2024-07-26 01:16:44.762436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.516 qpair failed and we were unable to recover it. 
00:34:14.516 [2024-07-26 01:16:44.762541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.516 [2024-07-26 01:16:44.762567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.516 qpair failed and we were unable to recover it. 00:34:14.516 [2024-07-26 01:16:44.762678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.516 [2024-07-26 01:16:44.762705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.516 qpair failed and we were unable to recover it. 00:34:14.516 [2024-07-26 01:16:44.762830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.516 [2024-07-26 01:16:44.762857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.516 qpair failed and we were unable to recover it. 00:34:14.516 [2024-07-26 01:16:44.762988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.516 [2024-07-26 01:16:44.763014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.516 qpair failed and we were unable to recover it. 00:34:14.516 [2024-07-26 01:16:44.763151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.516 [2024-07-26 01:16:44.763178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.516 qpair failed and we were unable to recover it. 
00:34:14.516 [2024-07-26 01:16:44.763342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.516 [2024-07-26 01:16:44.763369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.516 qpair failed and we were unable to recover it. 00:34:14.517 [2024-07-26 01:16:44.763508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.517 [2024-07-26 01:16:44.763535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.517 qpair failed and we were unable to recover it. 00:34:14.517 [2024-07-26 01:16:44.763671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.517 [2024-07-26 01:16:44.763697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.517 qpair failed and we were unable to recover it. 00:34:14.517 [2024-07-26 01:16:44.763802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.517 [2024-07-26 01:16:44.763829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.518 qpair failed and we were unable to recover it. 00:34:14.518 [2024-07-26 01:16:44.763992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.518 [2024-07-26 01:16:44.764018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.518 qpair failed and we were unable to recover it. 
00:34:14.518 [2024-07-26 01:16:44.764134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.518 [2024-07-26 01:16:44.764161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.518 qpair failed and we were unable to recover it. 00:34:14.518 [2024-07-26 01:16:44.764272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.518 [2024-07-26 01:16:44.764299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.518 qpair failed and we were unable to recover it. 00:34:14.518 [2024-07-26 01:16:44.764474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.518 [2024-07-26 01:16:44.764501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.518 qpair failed and we were unable to recover it. 00:34:14.518 [2024-07-26 01:16:44.764602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.518 [2024-07-26 01:16:44.764629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.518 qpair failed and we were unable to recover it. 00:34:14.518 [2024-07-26 01:16:44.764791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.518 [2024-07-26 01:16:44.764818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.518 qpair failed and we were unable to recover it. 
00:34:14.518 [2024-07-26 01:16:44.764925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.518 [2024-07-26 01:16:44.764952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.518 qpair failed and we were unable to recover it. 00:34:14.518 [2024-07-26 01:16:44.765121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.518 [2024-07-26 01:16:44.765149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.518 qpair failed and we were unable to recover it. 00:34:14.518 [2024-07-26 01:16:44.765280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.518 [2024-07-26 01:16:44.765306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.518 qpair failed and we were unable to recover it. 00:34:14.518 [2024-07-26 01:16:44.765469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.518 [2024-07-26 01:16:44.765495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.518 qpair failed and we were unable to recover it. 00:34:14.518 [2024-07-26 01:16:44.765624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.518 [2024-07-26 01:16:44.765650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.518 qpair failed and we were unable to recover it. 
00:34:14.518 [2024-07-26 01:16:44.765785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.518 [2024-07-26 01:16:44.765812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.518 qpair failed and we were unable to recover it. 00:34:14.518 [2024-07-26 01:16:44.765976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.518 [2024-07-26 01:16:44.766003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.518 qpair failed and we were unable to recover it. 00:34:14.518 [2024-07-26 01:16:44.766149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.518 [2024-07-26 01:16:44.766177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.518 qpair failed and we were unable to recover it. 00:34:14.518 [2024-07-26 01:16:44.766285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.518 [2024-07-26 01:16:44.766313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.518 qpair failed and we were unable to recover it. 00:34:14.518 [2024-07-26 01:16:44.766417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.518 [2024-07-26 01:16:44.766444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.518 qpair failed and we were unable to recover it. 
00:34:14.518 [2024-07-26 01:16:44.766621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.518 [2024-07-26 01:16:44.766648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.518 qpair failed and we were unable to recover it. 00:34:14.518 [2024-07-26 01:16:44.766757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.518 [2024-07-26 01:16:44.766784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.518 qpair failed and we were unable to recover it. 00:34:14.518 [2024-07-26 01:16:44.766948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.518 [2024-07-26 01:16:44.766974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.518 qpair failed and we were unable to recover it. 00:34:14.518 [2024-07-26 01:16:44.767114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.518 [2024-07-26 01:16:44.767142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.518 qpair failed and we were unable to recover it. 00:34:14.518 [2024-07-26 01:16:44.767286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.518 [2024-07-26 01:16:44.767313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.518 qpair failed and we were unable to recover it. 
00:34:14.518 [2024-07-26 01:16:44.767447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.518 [2024-07-26 01:16:44.767473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.518 qpair failed and we were unable to recover it. 00:34:14.518 [2024-07-26 01:16:44.767604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.767631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.767781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.767821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.767947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.767991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.768128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.768156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 
00:34:14.519 [2024-07-26 01:16:44.768291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.768317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.768449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.768475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.768611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.768637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.768743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.768769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.768903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.768929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 
00:34:14.519 [2024-07-26 01:16:44.769072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.769098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.769197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.769223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.769360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.769386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.769516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.769542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.769686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.769715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 
00:34:14.519 [2024-07-26 01:16:44.769837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.769864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.769998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.770025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.770176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.770203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.770338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.770366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.770475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.770502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 
00:34:14.519 [2024-07-26 01:16:44.770641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.770668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.770805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.770832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.770966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.770993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.771165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.771193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.771332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.771359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 
00:34:14.519 [2024-07-26 01:16:44.771502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.771529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.771668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.771695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.771834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.771860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.771977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.772004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.772118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.772145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 
00:34:14.519 [2024-07-26 01:16:44.772265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.772305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.772447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.772474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.772612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.772639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.772751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.772777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.772926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.772966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 
00:34:14.519 [2024-07-26 01:16:44.773112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.773141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.773258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.773286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.773400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.773428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.773567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.773595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.773730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.773758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 
00:34:14.519 [2024-07-26 01:16:44.773927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.773953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.774115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.774142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.774279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.774306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.774467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.774498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 00:34:14.519 [2024-07-26 01:16:44.774662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.519 [2024-07-26 01:16:44.774689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.519 qpair failed and we were unable to recover it. 
00:34:14.519 [2024-07-26 01:16:44.774802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.519 [2024-07-26 01:16:44.774829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.519 qpair failed and we were unable to recover it.
00:34:14.519-00:34:14.521 [the three-line error record above repeats continuously from 01:16:44.774960 through 01:16:44.793543; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111, cycling through tqpair values 0x7fba58000b90, 0x7fba68000b90, 0x7fba60000b90, and 0xd70600]
00:34:14.521 [2024-07-26 01:16:44.793681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.521 [2024-07-26 01:16:44.793707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.521 qpair failed and we were unable to recover it. 00:34:14.521 [2024-07-26 01:16:44.793816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.521 [2024-07-26 01:16:44.793842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.521 qpair failed and we were unable to recover it. 00:34:14.521 [2024-07-26 01:16:44.793959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.521 [2024-07-26 01:16:44.793987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.521 qpair failed and we were unable to recover it. 00:34:14.521 [2024-07-26 01:16:44.794137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.521 [2024-07-26 01:16:44.794176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.521 qpair failed and we were unable to recover it. 00:34:14.521 [2024-07-26 01:16:44.794301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.521 [2024-07-26 01:16:44.794331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.521 qpair failed and we were unable to recover it. 
00:34:14.521 [2024-07-26 01:16:44.794473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.521 [2024-07-26 01:16:44.794500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.521 qpair failed and we were unable to recover it. 00:34:14.521 [2024-07-26 01:16:44.794659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.521 [2024-07-26 01:16:44.794686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.521 qpair failed and we were unable to recover it. 00:34:14.521 [2024-07-26 01:16:44.794799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.521 [2024-07-26 01:16:44.794826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.521 qpair failed and we were unable to recover it. 00:34:14.521 [2024-07-26 01:16:44.794961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.521 [2024-07-26 01:16:44.794988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.521 qpair failed and we were unable to recover it. 00:34:14.521 [2024-07-26 01:16:44.795099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.523 [2024-07-26 01:16:44.795127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.523 qpair failed and we were unable to recover it. 
00:34:14.523 [2024-07-26 01:16:44.795267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.523 [2024-07-26 01:16:44.795293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.523 qpair failed and we were unable to recover it. 00:34:14.523 [2024-07-26 01:16:44.795429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.523 [2024-07-26 01:16:44.795456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.523 qpair failed and we were unable to recover it. 00:34:14.523 [2024-07-26 01:16:44.795619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.523 [2024-07-26 01:16:44.795645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.523 qpair failed and we were unable to recover it. 00:34:14.523 [2024-07-26 01:16:44.795782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.523 [2024-07-26 01:16:44.795808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.523 qpair failed and we were unable to recover it. 00:34:14.523 [2024-07-26 01:16:44.795932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.523 [2024-07-26 01:16:44.795958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.523 qpair failed and we were unable to recover it. 
00:34:14.523 [2024-07-26 01:16:44.796094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.523 [2024-07-26 01:16:44.796122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.523 qpair failed and we were unable to recover it. 00:34:14.523 [2024-07-26 01:16:44.796233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.523 [2024-07-26 01:16:44.796260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.523 qpair failed and we were unable to recover it. 00:34:14.523 [2024-07-26 01:16:44.796419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.523 [2024-07-26 01:16:44.796452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.523 qpair failed and we were unable to recover it. 00:34:14.523 [2024-07-26 01:16:44.796556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.523 [2024-07-26 01:16:44.796582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.523 qpair failed and we were unable to recover it. 00:34:14.524 [2024-07-26 01:16:44.796718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.524 [2024-07-26 01:16:44.796744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.524 qpair failed and we were unable to recover it. 
00:34:14.524 [2024-07-26 01:16:44.796863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.524 [2024-07-26 01:16:44.796889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.524 qpair failed and we were unable to recover it. 00:34:14.524 [2024-07-26 01:16:44.797024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.524 [2024-07-26 01:16:44.797050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.524 qpair failed and we were unable to recover it. 00:34:14.524 [2024-07-26 01:16:44.797223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.524 [2024-07-26 01:16:44.797250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.524 qpair failed and we were unable to recover it. 00:34:14.524 [2024-07-26 01:16:44.797410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.524 [2024-07-26 01:16:44.797436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.524 qpair failed and we were unable to recover it. 00:34:14.524 [2024-07-26 01:16:44.797572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.524 [2024-07-26 01:16:44.797598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.524 qpair failed and we were unable to recover it. 
00:34:14.524 [2024-07-26 01:16:44.797736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.524 [2024-07-26 01:16:44.797762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.524 qpair failed and we were unable to recover it. 00:34:14.524 [2024-07-26 01:16:44.797896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.524 [2024-07-26 01:16:44.797923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.524 qpair failed and we were unable to recover it. 00:34:14.524 [2024-07-26 01:16:44.798067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.798094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.798255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.798281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.798416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.798442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 
00:34:14.525 [2024-07-26 01:16:44.798588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.798614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.798756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.798783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.798897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.798924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.799065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.799092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.799209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.799236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 
00:34:14.525 [2024-07-26 01:16:44.799366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.799392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.799553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.799579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.799717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.799743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.799879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.799906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.800041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.800074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 
00:34:14.525 [2024-07-26 01:16:44.800195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.800222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.800351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.800377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.800482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.800508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.800621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.800647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.800767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.800793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 
00:34:14.525 [2024-07-26 01:16:44.800930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.800957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.801072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.801099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.801216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.801242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.801356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.801383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.801520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.801546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 
00:34:14.525 [2024-07-26 01:16:44.801655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.801681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.801818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.801844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.801959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.801986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.802145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.802185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.802329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.802355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 
00:34:14.525 [2024-07-26 01:16:44.802464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.802491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.802630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.802656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.802755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.802789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.802917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.802944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.803053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.803090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 
00:34:14.525 [2024-07-26 01:16:44.803233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.803259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.803424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.803449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.803559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.803585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.803721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.803747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.803884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.803910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 
00:34:14.525 [2024-07-26 01:16:44.804054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.804088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.804238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.804265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.525 [2024-07-26 01:16:44.804390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.525 [2024-07-26 01:16:44.804416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.525 qpair failed and we were unable to recover it. 00:34:14.526 [2024-07-26 01:16:44.804552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.526 [2024-07-26 01:16:44.804579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.526 qpair failed and we were unable to recover it. 00:34:14.526 [2024-07-26 01:16:44.804742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.526 [2024-07-26 01:16:44.804768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.526 qpair failed and we were unable to recover it. 
00:34:14.526 [2024-07-26 01:16:44.804930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.526 [2024-07-26 01:16:44.804956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.526 qpair failed and we were unable to recover it. 00:34:14.526 [2024-07-26 01:16:44.805105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.526 [2024-07-26 01:16:44.805132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.526 qpair failed and we were unable to recover it. 00:34:14.526 [2024-07-26 01:16:44.805264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.526 [2024-07-26 01:16:44.805291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.526 qpair failed and we were unable to recover it. 00:34:14.526 [2024-07-26 01:16:44.805452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.526 [2024-07-26 01:16:44.805478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.526 qpair failed and we were unable to recover it. 00:34:14.526 [2024-07-26 01:16:44.805614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.526 [2024-07-26 01:16:44.805640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.526 qpair failed and we were unable to recover it. 
00:34:14.526 [2024-07-26 01:16:44.805752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.526 [2024-07-26 01:16:44.805778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.526 qpair failed and we were unable to recover it.
00:34:14.526 [... the same three-line error record repeats from 01:16:44.805884 through 01:16:44.824036: connect() failed with errno = 111 (ECONNREFUSED), followed by a sock connection error for tqpair=0xd70600, 0x7fba58000b90, or 0x7fba68000b90 with addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it." ...]
00:34:14.529 [2024-07-26 01:16:44.824180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.824207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 00:34:14.529 [2024-07-26 01:16:44.824370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.824397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 00:34:14.529 [2024-07-26 01:16:44.824525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.824551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 00:34:14.529 [2024-07-26 01:16:44.824661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.824688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 00:34:14.529 [2024-07-26 01:16:44.824803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.824830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 
00:34:14.529 [2024-07-26 01:16:44.824991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.825017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 00:34:14.529 [2024-07-26 01:16:44.825135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.825163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 00:34:14.529 [2024-07-26 01:16:44.825304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.825330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 00:34:14.529 [2024-07-26 01:16:44.825433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.825460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 00:34:14.529 [2024-07-26 01:16:44.825589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.825615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 
00:34:14.529 [2024-07-26 01:16:44.825745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.825772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 00:34:14.529 [2024-07-26 01:16:44.825913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.825940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 00:34:14.529 [2024-07-26 01:16:44.826043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.826077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 00:34:14.529 [2024-07-26 01:16:44.826183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.826209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 00:34:14.529 [2024-07-26 01:16:44.826318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.826345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 
00:34:14.529 [2024-07-26 01:16:44.826446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.826471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 00:34:14.529 [2024-07-26 01:16:44.826638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.826664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 00:34:14.529 [2024-07-26 01:16:44.826801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.826826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 00:34:14.529 [2024-07-26 01:16:44.826986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.827012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 00:34:14.529 [2024-07-26 01:16:44.827123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.827150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 
00:34:14.529 [2024-07-26 01:16:44.827279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.827305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 00:34:14.529 [2024-07-26 01:16:44.827468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.827494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 00:34:14.529 [2024-07-26 01:16:44.827626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.827652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 00:34:14.529 [2024-07-26 01:16:44.827796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.827822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 00:34:14.529 [2024-07-26 01:16:44.827926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.827952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 
00:34:14.529 [2024-07-26 01:16:44.828092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.828119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 00:34:14.529 [2024-07-26 01:16:44.828231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.828259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 00:34:14.529 [2024-07-26 01:16:44.828394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.828420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 00:34:14.529 [2024-07-26 01:16:44.828525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.828551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 00:34:14.529 [2024-07-26 01:16:44.828682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.828708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 
00:34:14.529 [2024-07-26 01:16:44.828828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.828854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 00:34:14.529 [2024-07-26 01:16:44.828991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.829016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 00:34:14.529 [2024-07-26 01:16:44.829169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.529 [2024-07-26 01:16:44.829196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.529 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.829295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.829322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.829462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.829488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 
00:34:14.530 [2024-07-26 01:16:44.829643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.829669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.829792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.829817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.829975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.830001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.830153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.830181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.830289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.830315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 
00:34:14.530 [2024-07-26 01:16:44.830451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.830477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.830587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.830614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.830747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.830773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.830911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.830941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.831069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.831095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 
00:34:14.530 [2024-07-26 01:16:44.831200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.831227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.831328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.831354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.831502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.831529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.831669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.831695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.831812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.831838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 
00:34:14.530 [2024-07-26 01:16:44.831987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.832013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.832125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.832152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.832314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.832340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.832471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.832498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.832657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.832683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 
00:34:14.530 [2024-07-26 01:16:44.832821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.832847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.832991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.833016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.833181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.833208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.833317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.833343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.833505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.833532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 
00:34:14.530 [2024-07-26 01:16:44.833661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.833687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.833801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.833828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.833961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.833987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.834123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.834149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.834259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.834285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 
00:34:14.530 [2024-07-26 01:16:44.834414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.834440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.834558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.834585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.834688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.834714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.834874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.834900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.835041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.835073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 
00:34:14.530 [2024-07-26 01:16:44.835211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.835238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.835356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.835382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.835515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.835541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.835672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.835698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 00:34:14.530 [2024-07-26 01:16:44.835840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.530 [2024-07-26 01:16:44.835875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.530 qpair failed and we were unable to recover it. 
00:34:14.531 [2024-07-26 01:16:44.836042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.531 [2024-07-26 01:16:44.836074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.531 qpair failed and we were unable to recover it.
00:34:14.531 [last error triplet repeated 59 more times between 01:16:44.836209 and 01:16:44.845371; every attempt against 10.0.0.2:4420 failed with errno = 111]
00:34:14.532 [2024-07-26 01:16:44.845534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.532 [2024-07-26 01:16:44.845561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.532 qpair failed and we were unable to recover it.
00:34:14.532 [last error triplet repeated 50 more times between 01:16:44.845672 and 01:16:44.854120, interleaved with the following shell trace output]
00:34:14.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1983977 Killed "${NVMF_APP[@]}" "$@"
00:34:14.532 01:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:34:14.532 01:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:34:14.532 01:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:34:14.532 01:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:34:14.532 01:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:14.532 01:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1984507
00:34:14.532 01:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:34:14.532 01:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1984507
00:34:14.532 01:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1984507 ']'
00:34:14.532 01:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:14.532 01:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:34:14.532 01:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:14.532 01:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:34:14.817 01:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:14.818 [2024-07-26 01:16:44.854221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.854252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 00:34:14.818 [2024-07-26 01:16:44.854361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.854388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 00:34:14.818 [2024-07-26 01:16:44.854591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.854624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 00:34:14.818 [2024-07-26 01:16:44.854853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.854882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 00:34:14.818 [2024-07-26 01:16:44.855093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.855120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 
00:34:14.818 [2024-07-26 01:16:44.855239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.855266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 00:34:14.818 [2024-07-26 01:16:44.855431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.855458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 00:34:14.818 [2024-07-26 01:16:44.855569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.855596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 00:34:14.818 [2024-07-26 01:16:44.855708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.855735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 00:34:14.818 [2024-07-26 01:16:44.855895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.855921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 
00:34:14.818 [2024-07-26 01:16:44.856075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.856103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 00:34:14.818 [2024-07-26 01:16:44.856203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.856230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 00:34:14.818 [2024-07-26 01:16:44.856338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.856366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 00:34:14.818 [2024-07-26 01:16:44.856470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.856497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 00:34:14.818 [2024-07-26 01:16:44.856609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.856635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 
00:34:14.818 [2024-07-26 01:16:44.856780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.856806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 00:34:14.818 [2024-07-26 01:16:44.856940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.856968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 00:34:14.818 [2024-07-26 01:16:44.857077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.857114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 00:34:14.818 [2024-07-26 01:16:44.857243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.857270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 00:34:14.818 [2024-07-26 01:16:44.857380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.857407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 
00:34:14.818 [2024-07-26 01:16:44.857519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.857546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 00:34:14.818 [2024-07-26 01:16:44.857652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.857679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 00:34:14.818 [2024-07-26 01:16:44.857784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.857811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 00:34:14.818 [2024-07-26 01:16:44.857940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.857967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 00:34:14.818 [2024-07-26 01:16:44.858099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.858126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 
00:34:14.818 [2024-07-26 01:16:44.858236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.858263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 00:34:14.818 [2024-07-26 01:16:44.858369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.858395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 00:34:14.818 [2024-07-26 01:16:44.858508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.858535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 00:34:14.818 [2024-07-26 01:16:44.858658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.858685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 00:34:14.818 [2024-07-26 01:16:44.858818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.858845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 
00:34:14.818 [2024-07-26 01:16:44.859003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.859029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 00:34:14.818 [2024-07-26 01:16:44.859146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.859173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 00:34:14.818 [2024-07-26 01:16:44.859283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.859310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 00:34:14.818 [2024-07-26 01:16:44.859427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.859453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 00:34:14.818 [2024-07-26 01:16:44.859566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.818 [2024-07-26 01:16:44.859594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.818 qpair failed and we were unable to recover it. 
00:34:14.818 [2024-07-26 01:16:44.859708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.859734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 00:34:14.819 [2024-07-26 01:16:44.859847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.859874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 00:34:14.819 [2024-07-26 01:16:44.859973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.860000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 00:34:14.819 [2024-07-26 01:16:44.860143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.860169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 00:34:14.819 [2024-07-26 01:16:44.860281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.860307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 
00:34:14.819 [2024-07-26 01:16:44.860448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.860475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 00:34:14.819 [2024-07-26 01:16:44.860625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.860655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 00:34:14.819 [2024-07-26 01:16:44.860760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.860786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 00:34:14.819 [2024-07-26 01:16:44.860891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.860917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 00:34:14.819 [2024-07-26 01:16:44.861085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.861111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 
00:34:14.819 [2024-07-26 01:16:44.861220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.861246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 00:34:14.819 [2024-07-26 01:16:44.861357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.861383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 00:34:14.819 [2024-07-26 01:16:44.861522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.861548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 00:34:14.819 [2024-07-26 01:16:44.861661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.861688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 00:34:14.819 [2024-07-26 01:16:44.861820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.861846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 
00:34:14.819 [2024-07-26 01:16:44.861983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.862010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 00:34:14.819 [2024-07-26 01:16:44.862127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.862154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 00:34:14.819 [2024-07-26 01:16:44.862293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.862319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 00:34:14.819 [2024-07-26 01:16:44.862455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.862482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 00:34:14.819 [2024-07-26 01:16:44.862605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.862632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 
00:34:14.819 [2024-07-26 01:16:44.862795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.862822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 00:34:14.819 [2024-07-26 01:16:44.862923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.862949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 00:34:14.819 [2024-07-26 01:16:44.863090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.863117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 00:34:14.819 [2024-07-26 01:16:44.863221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.863248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 00:34:14.819 [2024-07-26 01:16:44.863360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.863386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 
00:34:14.819 [2024-07-26 01:16:44.864108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.864138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 00:34:14.819 [2024-07-26 01:16:44.864293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.864320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 00:34:14.819 [2024-07-26 01:16:44.864454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.864480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 00:34:14.819 [2024-07-26 01:16:44.864585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.864611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 00:34:14.819 [2024-07-26 01:16:44.864777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.864803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 
00:34:14.819 [2024-07-26 01:16:44.864941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.864968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 00:34:14.819 [2024-07-26 01:16:44.865131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.865158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 00:34:14.819 [2024-07-26 01:16:44.866048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.866087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 00:34:14.819 [2024-07-26 01:16:44.866234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.866260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 00:34:14.819 [2024-07-26 01:16:44.866429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.866456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 
00:34:14.819 [2024-07-26 01:16:44.866594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.866620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 00:34:14.819 [2024-07-26 01:16:44.866761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.866787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 00:34:14.819 [2024-07-26 01:16:44.866929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.819 [2024-07-26 01:16:44.866957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.819 qpair failed and we were unable to recover it. 00:34:14.819 [2024-07-26 01:16:44.867073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.820 [2024-07-26 01:16:44.867100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.820 qpair failed and we were unable to recover it. 00:34:14.820 [2024-07-26 01:16:44.867250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.820 [2024-07-26 01:16:44.867276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.820 qpair failed and we were unable to recover it. 
00:34:14.820 [2024-07-26 01:16:44.867398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.820 [2024-07-26 01:16:44.867424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.820 qpair failed and we were unable to recover it.
00:34:14.820 [2024-07-26 01:16:44.868555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.820 [2024-07-26 01:16:44.868595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.820 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed, errno = 111; sock connection error of tqpair=0xd70600 or tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — repeats through 01:16:44.885551 ...]
00:34:14.823 [2024-07-26 01:16:44.885525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.823 [2024-07-26 01:16:44.885551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.823 qpair failed and we were unable to recover it.
00:34:14.823 [2024-07-26 01:16:44.885689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.823 [2024-07-26 01:16:44.885719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.823 qpair failed and we were unable to recover it. 00:34:14.823 [2024-07-26 01:16:44.885860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.823 [2024-07-26 01:16:44.885886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.823 qpair failed and we were unable to recover it. 00:34:14.823 [2024-07-26 01:16:44.885990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.823 [2024-07-26 01:16:44.886016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.823 qpair failed and we were unable to recover it. 00:34:14.823 [2024-07-26 01:16:44.886179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.823 [2024-07-26 01:16:44.886206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.823 qpair failed and we were unable to recover it. 00:34:14.823 [2024-07-26 01:16:44.886370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.823 [2024-07-26 01:16:44.886396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.823 qpair failed and we were unable to recover it. 
00:34:14.823 [2024-07-26 01:16:44.886530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.823 [2024-07-26 01:16:44.886556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.823 qpair failed and we were unable to recover it. 00:34:14.823 [2024-07-26 01:16:44.886666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.823 [2024-07-26 01:16:44.886693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.823 qpair failed and we were unable to recover it. 00:34:14.823 [2024-07-26 01:16:44.886806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.823 [2024-07-26 01:16:44.886833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.823 qpair failed and we were unable to recover it. 00:34:14.823 [2024-07-26 01:16:44.886992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.823 [2024-07-26 01:16:44.887018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.823 qpair failed and we were unable to recover it. 00:34:14.823 [2024-07-26 01:16:44.887159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.823 [2024-07-26 01:16:44.887186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.823 qpair failed and we were unable to recover it. 
00:34:14.823 [2024-07-26 01:16:44.887335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.823 [2024-07-26 01:16:44.887361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.823 qpair failed and we were unable to recover it. 00:34:14.823 [2024-07-26 01:16:44.887498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.823 [2024-07-26 01:16:44.887524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.823 qpair failed and we were unable to recover it. 00:34:14.823 [2024-07-26 01:16:44.887685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.823 [2024-07-26 01:16:44.887711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.823 qpair failed and we were unable to recover it. 00:34:14.823 [2024-07-26 01:16:44.887818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.823 [2024-07-26 01:16:44.887844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.823 qpair failed and we were unable to recover it. 00:34:14.823 [2024-07-26 01:16:44.887957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.823 [2024-07-26 01:16:44.887984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.823 qpair failed and we were unable to recover it. 
00:34:14.823 [2024-07-26 01:16:44.888149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.823 [2024-07-26 01:16:44.888176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.823 qpair failed and we were unable to recover it. 00:34:14.823 [2024-07-26 01:16:44.888291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.823 [2024-07-26 01:16:44.888317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.823 qpair failed and we were unable to recover it. 00:34:14.823 [2024-07-26 01:16:44.888418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.823 [2024-07-26 01:16:44.888444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.823 qpair failed and we were unable to recover it. 00:34:14.823 [2024-07-26 01:16:44.888546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.823 [2024-07-26 01:16:44.888572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.823 qpair failed and we were unable to recover it. 00:34:14.823 [2024-07-26 01:16:44.888711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.823 [2024-07-26 01:16:44.888738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.823 qpair failed and we were unable to recover it. 
00:34:14.823 [2024-07-26 01:16:44.888897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.823 [2024-07-26 01:16:44.888923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.823 qpair failed and we were unable to recover it. 00:34:14.823 [2024-07-26 01:16:44.889022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.823 [2024-07-26 01:16:44.889049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.823 qpair failed and we were unable to recover it. 00:34:14.823 [2024-07-26 01:16:44.889218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.823 [2024-07-26 01:16:44.889244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.823 qpair failed and we were unable to recover it. 00:34:14.823 [2024-07-26 01:16:44.889348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.823 [2024-07-26 01:16:44.889373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.823 qpair failed and we were unable to recover it. 00:34:14.823 [2024-07-26 01:16:44.889479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.823 [2024-07-26 01:16:44.889506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.823 qpair failed and we were unable to recover it. 
00:34:14.823 [2024-07-26 01:16:44.889646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.823 [2024-07-26 01:16:44.889672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.823 qpair failed and we were unable to recover it. 00:34:14.823 [2024-07-26 01:16:44.889806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.823 [2024-07-26 01:16:44.889833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.823 qpair failed and we were unable to recover it. 00:34:14.823 [2024-07-26 01:16:44.889943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.823 [2024-07-26 01:16:44.889974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.823 qpair failed and we were unable to recover it. 00:34:14.823 [2024-07-26 01:16:44.890116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.823 [2024-07-26 01:16:44.890143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 00:34:14.824 [2024-07-26 01:16:44.890288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.890328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 
00:34:14.824 [2024-07-26 01:16:44.890494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.890522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 00:34:14.824 [2024-07-26 01:16:44.890637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.890663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 00:34:14.824 [2024-07-26 01:16:44.890772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.890798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 00:34:14.824 [2024-07-26 01:16:44.890962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.890988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 00:34:14.824 [2024-07-26 01:16:44.891103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.891131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 
00:34:14.824 [2024-07-26 01:16:44.891248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.891274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 00:34:14.824 [2024-07-26 01:16:44.891382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.891408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 00:34:14.824 [2024-07-26 01:16:44.891548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.891574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 00:34:14.824 [2024-07-26 01:16:44.891712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.891737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 00:34:14.824 [2024-07-26 01:16:44.891871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.891897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 
00:34:14.824 [2024-07-26 01:16:44.892007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.892033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 00:34:14.824 [2024-07-26 01:16:44.892186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.892212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 00:34:14.824 [2024-07-26 01:16:44.892314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.892341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 00:34:14.824 [2024-07-26 01:16:44.892487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.892512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 00:34:14.824 [2024-07-26 01:16:44.892651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.892677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 
00:34:14.824 [2024-07-26 01:16:44.892811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.892837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 00:34:14.824 [2024-07-26 01:16:44.892944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.892970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 00:34:14.824 [2024-07-26 01:16:44.893142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.893170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 00:34:14.824 [2024-07-26 01:16:44.893279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.893305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 00:34:14.824 [2024-07-26 01:16:44.893413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.893439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 
00:34:14.824 [2024-07-26 01:16:44.893548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.893575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 00:34:14.824 [2024-07-26 01:16:44.893708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.893734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 00:34:14.824 [2024-07-26 01:16:44.893872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.893898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 00:34:14.824 [2024-07-26 01:16:44.894052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.894086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 00:34:14.824 [2024-07-26 01:16:44.894218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.894248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 
00:34:14.824 [2024-07-26 01:16:44.894366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.894393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 00:34:14.824 [2024-07-26 01:16:44.894535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.894562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 00:34:14.824 [2024-07-26 01:16:44.894696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.894722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 00:34:14.824 [2024-07-26 01:16:44.894863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.894890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 00:34:14.824 [2024-07-26 01:16:44.895029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.895054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 
00:34:14.824 [2024-07-26 01:16:44.895181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.895208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 00:34:14.824 [2024-07-26 01:16:44.895325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.895351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 00:34:14.824 [2024-07-26 01:16:44.895512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.895538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 00:34:14.824 [2024-07-26 01:16:44.895647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.895673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 00:34:14.824 [2024-07-26 01:16:44.895807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.895833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 
00:34:14.824 [2024-07-26 01:16:44.895970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.824 [2024-07-26 01:16:44.895996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.824 qpair failed and we were unable to recover it. 00:34:14.824 [2024-07-26 01:16:44.896135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.825 [2024-07-26 01:16:44.896162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.825 qpair failed and we were unable to recover it. 00:34:14.825 [2024-07-26 01:16:44.896275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.825 [2024-07-26 01:16:44.896301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.825 qpair failed and we were unable to recover it. 00:34:14.825 [2024-07-26 01:16:44.896443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.825 [2024-07-26 01:16:44.896469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.825 qpair failed and we were unable to recover it. 00:34:14.825 [2024-07-26 01:16:44.896611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.825 [2024-07-26 01:16:44.896638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.825 qpair failed and we were unable to recover it. 
00:34:14.825 [2024-07-26 01:16:44.896802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.825 [2024-07-26 01:16:44.896828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.825 qpair failed and we were unable to recover it. 00:34:14.825 [2024-07-26 01:16:44.896964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.825 [2024-07-26 01:16:44.896990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.825 qpair failed and we were unable to recover it. 00:34:14.825 [2024-07-26 01:16:44.897099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.825 [2024-07-26 01:16:44.897126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.825 qpair failed and we were unable to recover it. 00:34:14.825 [2024-07-26 01:16:44.897246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.825 [2024-07-26 01:16:44.897272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.825 qpair failed and we were unable to recover it. 00:34:14.825 [2024-07-26 01:16:44.897412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.825 [2024-07-26 01:16:44.897439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.825 qpair failed and we were unable to recover it. 
00:34:14.825 [2024-07-26 01:16:44.897603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.825 [2024-07-26 01:16:44.897629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.825 qpair failed and we were unable to recover it. 00:34:14.825 [2024-07-26 01:16:44.897734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.825 [2024-07-26 01:16:44.897760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.825 qpair failed and we were unable to recover it. 00:34:14.825 [2024-07-26 01:16:44.897929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.825 [2024-07-26 01:16:44.897956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.825 qpair failed and we were unable to recover it. 00:34:14.825 [2024-07-26 01:16:44.898065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.825 [2024-07-26 01:16:44.898092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.825 qpair failed and we were unable to recover it. 00:34:14.825 [2024-07-26 01:16:44.898225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.825 [2024-07-26 01:16:44.898251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.825 qpair failed and we were unable to recover it. 
00:34:14.825 [2024-07-26 01:16:44.900210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.825 [2024-07-26 01:16:44.900248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.825 qpair failed and we were unable to recover it.
00:34:14.825 [2024-07-26 01:16:44.900550] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization...
00:34:14.825 [2024-07-26 01:16:44.900623] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:14.827 [2024-07-26 01:16:44.909193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.827 [2024-07-26 01:16:44.909237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.827 qpair failed and we were unable to recover it.
00:34:14.828 [2024-07-26 01:16:44.916135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.828 [2024-07-26 01:16:44.916161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.828 qpair failed and we were unable to recover it. 00:34:14.828 [2024-07-26 01:16:44.916275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.828 [2024-07-26 01:16:44.916302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.828 qpair failed and we were unable to recover it. 00:34:14.828 [2024-07-26 01:16:44.916417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.828 [2024-07-26 01:16:44.916443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.828 qpair failed and we were unable to recover it. 00:34:14.828 [2024-07-26 01:16:44.916552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.828 [2024-07-26 01:16:44.916578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.828 qpair failed and we were unable to recover it. 00:34:14.828 [2024-07-26 01:16:44.916684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.828 [2024-07-26 01:16:44.916711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.828 qpair failed and we were unable to recover it. 
00:34:14.828 [2024-07-26 01:16:44.916825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.828 [2024-07-26 01:16:44.916852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.828 qpair failed and we were unable to recover it. 00:34:14.828 [2024-07-26 01:16:44.916993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.828 [2024-07-26 01:16:44.917019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.828 qpair failed and we were unable to recover it. 00:34:14.828 [2024-07-26 01:16:44.917129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.828 [2024-07-26 01:16:44.917156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.828 qpair failed and we were unable to recover it. 00:34:14.828 [2024-07-26 01:16:44.917266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.828 [2024-07-26 01:16:44.917292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.828 qpair failed and we were unable to recover it. 00:34:14.828 [2024-07-26 01:16:44.917403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.828 [2024-07-26 01:16:44.917430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.828 qpair failed and we were unable to recover it. 
00:34:14.828 [2024-07-26 01:16:44.917596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.828 [2024-07-26 01:16:44.917622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.828 qpair failed and we were unable to recover it. 00:34:14.828 [2024-07-26 01:16:44.917762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.828 [2024-07-26 01:16:44.917791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.828 qpair failed and we were unable to recover it. 00:34:14.828 [2024-07-26 01:16:44.917900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.828 [2024-07-26 01:16:44.917927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.828 qpair failed and we were unable to recover it. 00:34:14.828 [2024-07-26 01:16:44.918092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.828 [2024-07-26 01:16:44.918120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.828 qpair failed and we were unable to recover it. 00:34:14.828 [2024-07-26 01:16:44.918260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.828 [2024-07-26 01:16:44.918287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.828 qpair failed and we were unable to recover it. 
00:34:14.828 [2024-07-26 01:16:44.918422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.828 [2024-07-26 01:16:44.918449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.828 qpair failed and we were unable to recover it. 00:34:14.828 [2024-07-26 01:16:44.918584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.828 [2024-07-26 01:16:44.918610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.828 qpair failed and we were unable to recover it. 00:34:14.828 [2024-07-26 01:16:44.918721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.828 [2024-07-26 01:16:44.918748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.828 qpair failed and we were unable to recover it. 00:34:14.828 [2024-07-26 01:16:44.918865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.828 [2024-07-26 01:16:44.918891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.828 qpair failed and we were unable to recover it. 00:34:14.828 [2024-07-26 01:16:44.919026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.828 [2024-07-26 01:16:44.919052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.828 qpair failed and we were unable to recover it. 
00:34:14.828 [2024-07-26 01:16:44.919172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.828 [2024-07-26 01:16:44.919198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.828 qpair failed and we were unable to recover it. 00:34:14.828 [2024-07-26 01:16:44.919303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.919329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 00:34:14.829 [2024-07-26 01:16:44.919460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.919486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 00:34:14.829 [2024-07-26 01:16:44.919631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.919658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 00:34:14.829 [2024-07-26 01:16:44.919795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.919820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 
00:34:14.829 [2024-07-26 01:16:44.919929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.919955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 00:34:14.829 [2024-07-26 01:16:44.920094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.920122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 00:34:14.829 [2024-07-26 01:16:44.920244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.920270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 00:34:14.829 [2024-07-26 01:16:44.920379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.920405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 00:34:14.829 [2024-07-26 01:16:44.920545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.920572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 
00:34:14.829 [2024-07-26 01:16:44.920713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.920739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 00:34:14.829 [2024-07-26 01:16:44.920875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.920901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 00:34:14.829 [2024-07-26 01:16:44.921038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.921090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 00:34:14.829 [2024-07-26 01:16:44.921207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.921235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 00:34:14.829 [2024-07-26 01:16:44.921345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.921371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 
00:34:14.829 [2024-07-26 01:16:44.921537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.921564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 00:34:14.829 [2024-07-26 01:16:44.921709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.921735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 00:34:14.829 [2024-07-26 01:16:44.921836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.921863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 00:34:14.829 [2024-07-26 01:16:44.922005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.922036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 00:34:14.829 [2024-07-26 01:16:44.922150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.922176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 
00:34:14.829 [2024-07-26 01:16:44.922312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.922339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 00:34:14.829 [2024-07-26 01:16:44.922476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.922502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 00:34:14.829 [2024-07-26 01:16:44.922664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.922690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 00:34:14.829 [2024-07-26 01:16:44.922800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.922827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 00:34:14.829 [2024-07-26 01:16:44.922976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.923003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 
00:34:14.829 [2024-07-26 01:16:44.923155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.923182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 00:34:14.829 [2024-07-26 01:16:44.923300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.923326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 00:34:14.829 [2024-07-26 01:16:44.923468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.923494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 00:34:14.829 [2024-07-26 01:16:44.923610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.923636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 00:34:14.829 [2024-07-26 01:16:44.923795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.923821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 
00:34:14.829 [2024-07-26 01:16:44.923954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.923980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 00:34:14.829 [2024-07-26 01:16:44.924106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.924133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 00:34:14.829 [2024-07-26 01:16:44.924252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.924280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 00:34:14.829 [2024-07-26 01:16:44.924398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.924425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 00:34:14.829 [2024-07-26 01:16:44.924564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.924591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 
00:34:14.829 [2024-07-26 01:16:44.924740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.924767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 00:34:14.829 [2024-07-26 01:16:44.924876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.924903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 00:34:14.829 [2024-07-26 01:16:44.925034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.925065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 00:34:14.829 [2024-07-26 01:16:44.925205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.829 [2024-07-26 01:16:44.925231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.829 qpair failed and we were unable to recover it. 00:34:14.830 [2024-07-26 01:16:44.925368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.830 [2024-07-26 01:16:44.925394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.830 qpair failed and we were unable to recover it. 
00:34:14.830 [2024-07-26 01:16:44.925502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.830 [2024-07-26 01:16:44.925528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.830 qpair failed and we were unable to recover it. 00:34:14.830 [2024-07-26 01:16:44.925633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.830 [2024-07-26 01:16:44.925660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.830 qpair failed and we were unable to recover it. 00:34:14.830 [2024-07-26 01:16:44.925773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.830 [2024-07-26 01:16:44.925800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.830 qpair failed and we were unable to recover it. 00:34:14.830 [2024-07-26 01:16:44.925963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.830 [2024-07-26 01:16:44.925989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.830 qpair failed and we were unable to recover it. 00:34:14.830 [2024-07-26 01:16:44.926127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.830 [2024-07-26 01:16:44.926154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.830 qpair failed and we were unable to recover it. 
00:34:14.830 [2024-07-26 01:16:44.926295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.830 [2024-07-26 01:16:44.926322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.830 qpair failed and we were unable to recover it. 00:34:14.830 [2024-07-26 01:16:44.926435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.830 [2024-07-26 01:16:44.926461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.830 qpair failed and we were unable to recover it. 00:34:14.830 [2024-07-26 01:16:44.926621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.830 [2024-07-26 01:16:44.926647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.830 qpair failed and we were unable to recover it. 00:34:14.830 [2024-07-26 01:16:44.926777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.830 [2024-07-26 01:16:44.926803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.830 qpair failed and we were unable to recover it. 00:34:14.830 [2024-07-26 01:16:44.926944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.830 [2024-07-26 01:16:44.926971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.830 qpair failed and we were unable to recover it. 
00:34:14.830 [2024-07-26 01:16:44.927111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.830 [2024-07-26 01:16:44.927138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.830 qpair failed and we were unable to recover it. 00:34:14.830 [2024-07-26 01:16:44.927268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.830 [2024-07-26 01:16:44.927295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.830 qpair failed and we were unable to recover it. 00:34:14.830 [2024-07-26 01:16:44.927421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.830 [2024-07-26 01:16:44.927447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.830 qpair failed and we were unable to recover it. 00:34:14.830 [2024-07-26 01:16:44.927557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.830 [2024-07-26 01:16:44.927583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.830 qpair failed and we were unable to recover it. 00:34:14.830 [2024-07-26 01:16:44.927716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.830 [2024-07-26 01:16:44.927743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.830 qpair failed and we were unable to recover it. 
00:34:14.830 [2024-07-26 01:16:44.927867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.830 [2024-07-26 01:16:44.927893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.830 qpair failed and we were unable to recover it. 00:34:14.830 [... the preceding three messages repeat with only timestamps advancing, from 01:16:44.927995 through 01:16:44.946009: every connect() attempt to 10.0.0.2:4420 fails with errno = 111 (ECONNREFUSED) and the qpair cannot be recovered ...] 00:34:14.831 EAL: No free 2048 kB hugepages reported on node 1 
00:34:14.833 [2024-07-26 01:16:44.946130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.833 [2024-07-26 01:16:44.946157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.833 qpair failed and we were unable to recover it. 00:34:14.833 [2024-07-26 01:16:44.946285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.833 [2024-07-26 01:16:44.946311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.833 qpair failed and we were unable to recover it. 00:34:14.833 [2024-07-26 01:16:44.946411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.833 [2024-07-26 01:16:44.946437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.833 qpair failed and we were unable to recover it. 00:34:14.833 [2024-07-26 01:16:44.946598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.833 [2024-07-26 01:16:44.946625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.833 qpair failed and we were unable to recover it. 00:34:14.833 [2024-07-26 01:16:44.946760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.833 [2024-07-26 01:16:44.946787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.833 qpair failed and we were unable to recover it. 
00:34:14.833 [2024-07-26 01:16:44.946903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.833 [2024-07-26 01:16:44.946929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.833 qpair failed and we were unable to recover it. 00:34:14.833 [2024-07-26 01:16:44.947066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.833 [2024-07-26 01:16:44.947093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.833 qpair failed and we were unable to recover it. 00:34:14.833 [2024-07-26 01:16:44.947203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.833 [2024-07-26 01:16:44.947229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.833 qpair failed and we were unable to recover it. 00:34:14.833 [2024-07-26 01:16:44.947366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.833 [2024-07-26 01:16:44.947406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.833 qpair failed and we were unable to recover it. 00:34:14.833 [2024-07-26 01:16:44.947525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.833 [2024-07-26 01:16:44.947552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.833 qpair failed and we were unable to recover it. 
00:34:14.833 [2024-07-26 01:16:44.947697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.833 [2024-07-26 01:16:44.947724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.833 qpair failed and we were unable to recover it. 00:34:14.833 [2024-07-26 01:16:44.947861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.833 [2024-07-26 01:16:44.947887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.833 qpair failed and we were unable to recover it. 00:34:14.833 [2024-07-26 01:16:44.948053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.833 [2024-07-26 01:16:44.948088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.833 qpair failed and we were unable to recover it. 00:34:14.833 [2024-07-26 01:16:44.948203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.833 [2024-07-26 01:16:44.948229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.833 qpair failed and we were unable to recover it. 00:34:14.833 [2024-07-26 01:16:44.948343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.833 [2024-07-26 01:16:44.948371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 
00:34:14.834 [2024-07-26 01:16:44.948499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.948526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 00:34:14.834 [2024-07-26 01:16:44.948688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.948715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 00:34:14.834 [2024-07-26 01:16:44.948822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.948848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 00:34:14.834 [2024-07-26 01:16:44.948985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.949012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 00:34:14.834 [2024-07-26 01:16:44.949174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.949200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 
00:34:14.834 [2024-07-26 01:16:44.949336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.949364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 00:34:14.834 [2024-07-26 01:16:44.949530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.949556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 00:34:14.834 [2024-07-26 01:16:44.949701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.949728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 00:34:14.834 [2024-07-26 01:16:44.949862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.949888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 00:34:14.834 [2024-07-26 01:16:44.950014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.950040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 
00:34:14.834 [2024-07-26 01:16:44.950177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.950217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 00:34:14.834 [2024-07-26 01:16:44.950357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.950384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 00:34:14.834 [2024-07-26 01:16:44.950525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.950551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 00:34:14.834 [2024-07-26 01:16:44.950692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.950718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 00:34:14.834 [2024-07-26 01:16:44.950825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.950851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 
00:34:14.834 [2024-07-26 01:16:44.950989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.951016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 00:34:14.834 [2024-07-26 01:16:44.951184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.951211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 00:34:14.834 [2024-07-26 01:16:44.951329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.951355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 00:34:14.834 [2024-07-26 01:16:44.951496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.951522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 00:34:14.834 [2024-07-26 01:16:44.951658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.951685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 
00:34:14.834 [2024-07-26 01:16:44.951825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.951851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 00:34:14.834 [2024-07-26 01:16:44.951980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.952006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 00:34:14.834 [2024-07-26 01:16:44.952129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.952156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 00:34:14.834 [2024-07-26 01:16:44.952297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.952323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 00:34:14.834 [2024-07-26 01:16:44.952486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.952512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 
00:34:14.834 [2024-07-26 01:16:44.952650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.952676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 00:34:14.834 [2024-07-26 01:16:44.952781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.952808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 00:34:14.834 [2024-07-26 01:16:44.952976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.953003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 00:34:14.834 [2024-07-26 01:16:44.953123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.953150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 00:34:14.834 [2024-07-26 01:16:44.953289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.953316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 
00:34:14.834 [2024-07-26 01:16:44.953455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.953482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 00:34:14.834 [2024-07-26 01:16:44.953586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.953613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 00:34:14.834 [2024-07-26 01:16:44.953775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.953801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 00:34:14.834 [2024-07-26 01:16:44.953916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.953947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 00:34:14.834 [2024-07-26 01:16:44.954109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.954136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 
00:34:14.834 [2024-07-26 01:16:44.954264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.954290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 00:34:14.834 [2024-07-26 01:16:44.954403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.834 [2024-07-26 01:16:44.954431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.834 qpair failed and we were unable to recover it. 00:34:14.834 [2024-07-26 01:16:44.954568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.835 [2024-07-26 01:16:44.954594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.835 qpair failed and we were unable to recover it. 00:34:14.835 [2024-07-26 01:16:44.954733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.835 [2024-07-26 01:16:44.954759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.835 qpair failed and we were unable to recover it. 00:34:14.835 [2024-07-26 01:16:44.954867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.835 [2024-07-26 01:16:44.954893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.835 qpair failed and we were unable to recover it. 
00:34:14.835 [2024-07-26 01:16:44.955031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.835 [2024-07-26 01:16:44.955073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.835 qpair failed and we were unable to recover it. 00:34:14.835 [2024-07-26 01:16:44.955207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.835 [2024-07-26 01:16:44.955233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.835 qpair failed and we were unable to recover it. 00:34:14.835 [2024-07-26 01:16:44.955394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.835 [2024-07-26 01:16:44.955420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.835 qpair failed and we were unable to recover it. 00:34:14.835 [2024-07-26 01:16:44.955551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.835 [2024-07-26 01:16:44.955577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.835 qpair failed and we were unable to recover it. 00:34:14.835 [2024-07-26 01:16:44.955698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.835 [2024-07-26 01:16:44.955724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.835 qpair failed and we were unable to recover it. 
00:34:14.835 [2024-07-26 01:16:44.955867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.835 [2024-07-26 01:16:44.955894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.835 qpair failed and we were unable to recover it. 00:34:14.835 [2024-07-26 01:16:44.956052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.835 [2024-07-26 01:16:44.956086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.835 qpair failed and we were unable to recover it. 00:34:14.835 [2024-07-26 01:16:44.956190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.835 [2024-07-26 01:16:44.956217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.835 qpair failed and we were unable to recover it. 00:34:14.835 [2024-07-26 01:16:44.956334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.835 [2024-07-26 01:16:44.956361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.835 qpair failed and we were unable to recover it. 00:34:14.835 [2024-07-26 01:16:44.956496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.835 [2024-07-26 01:16:44.956523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.835 qpair failed and we were unable to recover it. 
00:34:14.835 [2024-07-26 01:16:44.956630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.835 [2024-07-26 01:16:44.956656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.835 qpair failed and we were unable to recover it. 00:34:14.835 [2024-07-26 01:16:44.956794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.835 [2024-07-26 01:16:44.956820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.835 qpair failed and we were unable to recover it. 00:34:14.835 [2024-07-26 01:16:44.956952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.835 [2024-07-26 01:16:44.956978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.835 qpair failed and we were unable to recover it. 00:34:14.835 [2024-07-26 01:16:44.957091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.835 [2024-07-26 01:16:44.957118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.835 qpair failed and we were unable to recover it. 00:34:14.835 [2024-07-26 01:16:44.957250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.835 [2024-07-26 01:16:44.957277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.835 qpair failed and we were unable to recover it. 
00:34:14.835 [2024-07-26 01:16:44.957389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.835 [2024-07-26 01:16:44.957415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.835 qpair failed and we were unable to recover it. 00:34:14.835 [2024-07-26 01:16:44.957547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.835 [2024-07-26 01:16:44.957573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.835 qpair failed and we were unable to recover it. 00:34:14.835 [2024-07-26 01:16:44.957706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.835 [2024-07-26 01:16:44.957733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.835 qpair failed and we were unable to recover it. 00:34:14.835 [2024-07-26 01:16:44.957848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.835 [2024-07-26 01:16:44.957874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.835 qpair failed and we were unable to recover it. 00:34:14.835 [2024-07-26 01:16:44.958023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.835 [2024-07-26 01:16:44.958049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.835 qpair failed and we were unable to recover it. 
00:34:14.835 [2024-07-26 01:16:44.958202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.835 [2024-07-26 01:16:44.958229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.835 qpair failed and we were unable to recover it. 00:34:14.835 [2024-07-26 01:16:44.958364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.835 [2024-07-26 01:16:44.958390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.835 qpair failed and we were unable to recover it. 00:34:14.835 [2024-07-26 01:16:44.958527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.835 [2024-07-26 01:16:44.958553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.835 qpair failed and we were unable to recover it. 00:34:14.835 [2024-07-26 01:16:44.958670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.835 [2024-07-26 01:16:44.958696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.835 qpair failed and we were unable to recover it. 00:34:14.835 [2024-07-26 01:16:44.958808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.835 [2024-07-26 01:16:44.958834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.835 qpair failed and we were unable to recover it. 
00:34:14.837 [2024-07-26 01:16:44.969240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.837 [2024-07-26 01:16:44.969266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.837 qpair failed and we were unable to recover it.
00:34:14.837 [2024-07-26 01:16:44.969378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.837 [2024-07-26 01:16:44.969404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.837 qpair failed and we were unable to recover it.
00:34:14.837 [2024-07-26 01:16:44.969543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.837 [2024-07-26 01:16:44.969569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.837 qpair failed and we were unable to recover it.
00:34:14.837 [2024-07-26 01:16:44.969602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:34:14.837 [2024-07-26 01:16:44.969702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.837 [2024-07-26 01:16:44.969729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.837 qpair failed and we were unable to recover it.
00:34:14.837 [2024-07-26 01:16:44.969835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.837 [2024-07-26 01:16:44.969862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.837 qpair failed and we were unable to recover it.
00:34:14.838 [2024-07-26 01:16:44.972356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.838 [2024-07-26 01:16:44.972382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.838 qpair failed and we were unable to recover it.
00:34:14.838 [2024-07-26 01:16:44.972488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.838 [2024-07-26 01:16:44.972515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.838 qpair failed and we were unable to recover it.
00:34:14.838 [2024-07-26 01:16:44.972651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.838 [2024-07-26 01:16:44.972677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.838 qpair failed and we were unable to recover it.
00:34:14.838 [2024-07-26 01:16:44.972873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.838 [2024-07-26 01:16:44.972911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.838 qpair failed and we were unable to recover it.
00:34:14.838 [2024-07-26 01:16:44.973067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.838 [2024-07-26 01:16:44.973100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.838 qpair failed and we were unable to recover it.
00:34:14.838 [2024-07-26 01:16:44.980675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.838 [2024-07-26 01:16:44.980707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.838 qpair failed and we were unable to recover it. 00:34:14.838 [2024-07-26 01:16:44.980898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.838 [2024-07-26 01:16:44.980930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.838 qpair failed and we were unable to recover it. 00:34:14.838 [2024-07-26 01:16:44.981089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.838 [2024-07-26 01:16:44.981121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.838 qpair failed and we were unable to recover it. 00:34:14.838 [2024-07-26 01:16:44.981248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.838 [2024-07-26 01:16:44.981277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.838 qpair failed and we were unable to recover it. 00:34:14.838 [2024-07-26 01:16:44.981426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.838 [2024-07-26 01:16:44.981458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.838 qpair failed and we were unable to recover it. 
00:34:14.838 [2024-07-26 01:16:44.981677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.838 [2024-07-26 01:16:44.981709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.838 qpair failed and we were unable to recover it. 00:34:14.838 [2024-07-26 01:16:44.981837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.838 [2024-07-26 01:16:44.981867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.838 qpair failed and we were unable to recover it. 00:34:14.838 [2024-07-26 01:16:44.982024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.982053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 00:34:14.839 [2024-07-26 01:16:44.982219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.982249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 00:34:14.839 [2024-07-26 01:16:44.982403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.982433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 
00:34:14.839 [2024-07-26 01:16:44.982639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.982670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 00:34:14.839 [2024-07-26 01:16:44.982788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.982818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 00:34:14.839 [2024-07-26 01:16:44.982951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.982980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 00:34:14.839 [2024-07-26 01:16:44.983174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.983206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 00:34:14.839 [2024-07-26 01:16:44.983364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.983394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 
00:34:14.839 [2024-07-26 01:16:44.983517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.983548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 00:34:14.839 [2024-07-26 01:16:44.983692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.983722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 00:34:14.839 [2024-07-26 01:16:44.983867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.983897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 00:34:14.839 [2024-07-26 01:16:44.984024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.984053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 00:34:14.839 [2024-07-26 01:16:44.984215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.984244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 
00:34:14.839 [2024-07-26 01:16:44.986088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.986136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 00:34:14.839 [2024-07-26 01:16:44.986296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.986328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 00:34:14.839 [2024-07-26 01:16:44.986488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.986520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 00:34:14.839 [2024-07-26 01:16:44.986676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.986708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 00:34:14.839 [2024-07-26 01:16:44.986868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.986899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 
00:34:14.839 [2024-07-26 01:16:44.987025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.987055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 00:34:14.839 [2024-07-26 01:16:44.987182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.987210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 00:34:14.839 [2024-07-26 01:16:44.987326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.987364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 00:34:14.839 [2024-07-26 01:16:44.987513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.987543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 00:34:14.839 [2024-07-26 01:16:44.987692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.987723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 
00:34:14.839 [2024-07-26 01:16:44.987877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.987909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 00:34:14.839 [2024-07-26 01:16:44.988038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.988084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 00:34:14.839 [2024-07-26 01:16:44.988227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.988257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 00:34:14.839 [2024-07-26 01:16:44.988444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.988476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 00:34:14.839 [2024-07-26 01:16:44.988596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.988627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 
00:34:14.839 [2024-07-26 01:16:44.988785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.988817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 00:34:14.839 [2024-07-26 01:16:44.988966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.988994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 00:34:14.839 [2024-07-26 01:16:44.991072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.991131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 00:34:14.839 [2024-07-26 01:16:44.991344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.991379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 00:34:14.839 [2024-07-26 01:16:44.991557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.991590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 
00:34:14.839 [2024-07-26 01:16:44.991770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.991802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 00:34:14.839 [2024-07-26 01:16:44.991954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.991986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 00:34:14.839 [2024-07-26 01:16:44.992162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.992194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 00:34:14.839 [2024-07-26 01:16:44.992371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-26 01:16:44.992403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.839 qpair failed and we were unable to recover it. 00:34:14.839 [2024-07-26 01:16:44.992576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:44.992606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 
00:34:14.840 [2024-07-26 01:16:44.992761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:44.992792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 00:34:14.840 [2024-07-26 01:16:44.993025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:44.993056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 00:34:14.840 [2024-07-26 01:16:44.993234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:44.993262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 00:34:14.840 [2024-07-26 01:16:44.993386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:44.993415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 00:34:14.840 [2024-07-26 01:16:44.993567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:44.993597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 
00:34:14.840 [2024-07-26 01:16:44.993742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:44.993772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 00:34:14.840 [2024-07-26 01:16:44.993950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:44.993980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 00:34:14.840 [2024-07-26 01:16:44.994151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:44.994181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 00:34:14.840 [2024-07-26 01:16:44.994318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:44.994355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 00:34:14.840 [2024-07-26 01:16:44.994522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:44.994552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 
00:34:14.840 [2024-07-26 01:16:44.994701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:44.994729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 00:34:14.840 [2024-07-26 01:16:44.994873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:44.994901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 00:34:14.840 [2024-07-26 01:16:44.997072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:44.997130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 00:34:14.840 [2024-07-26 01:16:44.997333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:44.997375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 00:34:14.840 [2024-07-26 01:16:44.997537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:44.997565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 
00:34:14.840 [2024-07-26 01:16:44.997728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:44.997755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 00:34:14.840 [2024-07-26 01:16:44.997872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:44.997900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 00:34:14.840 [2024-07-26 01:16:44.998032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:44.998066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 00:34:14.840 [2024-07-26 01:16:44.998188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:44.998214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 00:34:14.840 [2024-07-26 01:16:44.998327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:44.998360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 
00:34:14.840 [2024-07-26 01:16:44.998522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:44.998549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 00:34:14.840 [2024-07-26 01:16:44.998665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:44.998692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 00:34:14.840 [2024-07-26 01:16:44.998821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:44.998848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 00:34:14.840 [2024-07-26 01:16:44.998985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:44.999011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 00:34:14.840 [2024-07-26 01:16:44.999117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:44.999144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 
00:34:14.840 [2024-07-26 01:16:44.999281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:44.999308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 00:34:14.840 [2024-07-26 01:16:44.999440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:44.999467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 00:34:14.840 [2024-07-26 01:16:44.999579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:44.999606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 00:34:14.840 [2024-07-26 01:16:44.999771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:44.999798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 00:34:14.840 [2024-07-26 01:16:44.999961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:44.999988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 
00:34:14.840 [2024-07-26 01:16:45.000124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:45.000151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 00:34:14.840 [2024-07-26 01:16:45.000268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:45.000294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 00:34:14.840 [2024-07-26 01:16:45.000463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:45.000491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 00:34:14.840 [2024-07-26 01:16:45.000626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:45.000653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 00:34:14.840 [2024-07-26 01:16:45.000766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-26 01:16:45.000793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.840 qpair failed and we were unable to recover it. 
00:34:14.843 [2024-07-26 01:16:45.018309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.843 [2024-07-26 01:16:45.018337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.843 qpair failed and we were unable to recover it. 00:34:14.843 [2024-07-26 01:16:45.018469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.018496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 00:34:14.844 [2024-07-26 01:16:45.018633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.018661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 00:34:14.844 [2024-07-26 01:16:45.018790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.018817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 00:34:14.844 [2024-07-26 01:16:45.018965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.018991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 
00:34:14.844 [2024-07-26 01:16:45.019094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.019122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 00:34:14.844 [2024-07-26 01:16:45.019316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.019343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 00:34:14.844 [2024-07-26 01:16:45.019475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.019502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 00:34:14.844 [2024-07-26 01:16:45.019619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.019645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 00:34:14.844 [2024-07-26 01:16:45.019759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.019786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 
00:34:14.844 [2024-07-26 01:16:45.019921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.019947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 00:34:14.844 [2024-07-26 01:16:45.020066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.020092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 00:34:14.844 [2024-07-26 01:16:45.020203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.020230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 00:34:14.844 [2024-07-26 01:16:45.020363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.020390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 00:34:14.844 [2024-07-26 01:16:45.020518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.020545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 
00:34:14.844 [2024-07-26 01:16:45.020653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.020680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 00:34:14.844 [2024-07-26 01:16:45.020791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.020818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 00:34:14.844 [2024-07-26 01:16:45.020948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.020975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 00:34:14.844 [2024-07-26 01:16:45.021095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.021122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 00:34:14.844 [2024-07-26 01:16:45.021237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.021264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 
00:34:14.844 [2024-07-26 01:16:45.021364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.021391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 00:34:14.844 [2024-07-26 01:16:45.021529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.021557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 00:34:14.844 [2024-07-26 01:16:45.021702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.021729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 00:34:14.844 [2024-07-26 01:16:45.021834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.021860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 00:34:14.844 [2024-07-26 01:16:45.022027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.022054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 
00:34:14.844 [2024-07-26 01:16:45.022197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.022224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 00:34:14.844 [2024-07-26 01:16:45.022329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.022355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 00:34:14.844 [2024-07-26 01:16:45.022494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.022522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 00:34:14.844 [2024-07-26 01:16:45.022621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.022647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 00:34:14.844 [2024-07-26 01:16:45.022814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.022840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 
00:34:14.844 [2024-07-26 01:16:45.022952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.022979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 00:34:14.844 [2024-07-26 01:16:45.023161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.023192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 00:34:14.844 [2024-07-26 01:16:45.023294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.023320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 00:34:14.844 [2024-07-26 01:16:45.023451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.023479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 00:34:14.844 [2024-07-26 01:16:45.023617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.023644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 
00:34:14.844 [2024-07-26 01:16:45.023747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.023773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 00:34:14.844 [2024-07-26 01:16:45.023884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.023911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 00:34:14.844 [2024-07-26 01:16:45.024016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.024043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 00:34:14.844 [2024-07-26 01:16:45.024179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.844 [2024-07-26 01:16:45.024206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.844 qpair failed and we were unable to recover it. 00:34:14.844 [2024-07-26 01:16:45.024343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.024371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 
00:34:14.845 [2024-07-26 01:16:45.024554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.024581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 00:34:14.845 [2024-07-26 01:16:45.024713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.024739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 00:34:14.845 [2024-07-26 01:16:45.024876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.024903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 00:34:14.845 [2024-07-26 01:16:45.025047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.025080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 00:34:14.845 [2024-07-26 01:16:45.025184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.025211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 
00:34:14.845 [2024-07-26 01:16:45.025353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.025380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 00:34:14.845 [2024-07-26 01:16:45.025508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.025535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 00:34:14.845 [2024-07-26 01:16:45.025637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.025664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 00:34:14.845 [2024-07-26 01:16:45.025799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.025826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 00:34:14.845 [2024-07-26 01:16:45.025961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.025987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 
00:34:14.845 [2024-07-26 01:16:45.026099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.026127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 00:34:14.845 [2024-07-26 01:16:45.026236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.026264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 00:34:14.845 [2024-07-26 01:16:45.026362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.026389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 00:34:14.845 [2024-07-26 01:16:45.026529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.026555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 00:34:14.845 [2024-07-26 01:16:45.026701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.026728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 
00:34:14.845 [2024-07-26 01:16:45.026844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.026871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 00:34:14.845 [2024-07-26 01:16:45.026996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.027023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 00:34:14.845 [2024-07-26 01:16:45.027162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.027189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 00:34:14.845 [2024-07-26 01:16:45.027320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.027347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 00:34:14.845 [2024-07-26 01:16:45.027452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.027479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 
00:34:14.845 [2024-07-26 01:16:45.027618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.027644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 00:34:14.845 [2024-07-26 01:16:45.027749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.027775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 00:34:14.845 [2024-07-26 01:16:45.027883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.027910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 00:34:14.845 [2024-07-26 01:16:45.028042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.028074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 00:34:14.845 [2024-07-26 01:16:45.028182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.028209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 
00:34:14.845 [2024-07-26 01:16:45.028348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.028374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 00:34:14.845 [2024-07-26 01:16:45.028514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.028542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 00:34:14.845 [2024-07-26 01:16:45.028673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.028700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 00:34:14.845 [2024-07-26 01:16:45.028837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.028863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 00:34:14.845 [2024-07-26 01:16:45.028967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.028994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 
00:34:14.845 [2024-07-26 01:16:45.029154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.029182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 00:34:14.845 [2024-07-26 01:16:45.029322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.845 [2024-07-26 01:16:45.029349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.845 qpair failed and we were unable to recover it. 00:34:14.846 [2024-07-26 01:16:45.029515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.846 [2024-07-26 01:16:45.029546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.846 qpair failed and we were unable to recover it. 00:34:14.846 [2024-07-26 01:16:45.029658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.846 [2024-07-26 01:16:45.029685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.846 qpair failed and we were unable to recover it. 00:34:14.846 [2024-07-26 01:16:45.029821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.846 [2024-07-26 01:16:45.029848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.846 qpair failed and we were unable to recover it. 
00:34:14.846 [2024-07-26 01:16:45.029990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.846 [2024-07-26 01:16:45.030017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.846 qpair failed and we were unable to recover it. 00:34:14.846 [2024-07-26 01:16:45.030180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.846 [2024-07-26 01:16:45.030207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.846 qpair failed and we were unable to recover it. 00:34:14.846 [2024-07-26 01:16:45.030321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.846 [2024-07-26 01:16:45.030348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.846 qpair failed and we were unable to recover it. 00:34:14.846 [2024-07-26 01:16:45.030468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.846 [2024-07-26 01:16:45.030496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.846 qpair failed and we were unable to recover it. 00:34:14.846 [2024-07-26 01:16:45.030635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.846 [2024-07-26 01:16:45.030662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.846 qpair failed and we were unable to recover it. 
00:34:14.849 [2024-07-26 01:16:45.048338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.048366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 00:34:14.849 [2024-07-26 01:16:45.048476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.048502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 00:34:14.849 [2024-07-26 01:16:45.048618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.048646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 00:34:14.849 [2024-07-26 01:16:45.048780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.048807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 00:34:14.849 [2024-07-26 01:16:45.048944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.048972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 
00:34:14.849 [2024-07-26 01:16:45.049139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.049166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 00:34:14.849 [2024-07-26 01:16:45.049302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.049330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 00:34:14.849 [2024-07-26 01:16:45.049480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.049508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 00:34:14.849 [2024-07-26 01:16:45.049675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.049703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 00:34:14.849 [2024-07-26 01:16:45.049861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.049889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 
00:34:14.849 [2024-07-26 01:16:45.050008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.050036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 00:34:14.849 [2024-07-26 01:16:45.050182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.050211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 00:34:14.849 [2024-07-26 01:16:45.050368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.050395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 00:34:14.849 [2024-07-26 01:16:45.050526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.050553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 00:34:14.849 [2024-07-26 01:16:45.050723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.050750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 
00:34:14.849 [2024-07-26 01:16:45.050890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.050917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 00:34:14.849 [2024-07-26 01:16:45.051057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.051089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 00:34:14.849 [2024-07-26 01:16:45.051256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.051284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 00:34:14.849 [2024-07-26 01:16:45.051427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.051458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 00:34:14.849 [2024-07-26 01:16:45.051570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.051597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 
00:34:14.849 [2024-07-26 01:16:45.051766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.051793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 00:34:14.849 [2024-07-26 01:16:45.051917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.051945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 00:34:14.849 [2024-07-26 01:16:45.052047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.052089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 00:34:14.849 [2024-07-26 01:16:45.052256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.052283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 00:34:14.849 [2024-07-26 01:16:45.052420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.052447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 
00:34:14.849 [2024-07-26 01:16:45.052557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.052583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 00:34:14.849 [2024-07-26 01:16:45.052723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.052752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 00:34:14.849 [2024-07-26 01:16:45.052896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.052922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 00:34:14.849 [2024-07-26 01:16:45.053038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.053072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 00:34:14.849 [2024-07-26 01:16:45.053193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.053220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 
00:34:14.849 [2024-07-26 01:16:45.053351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.053378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 00:34:14.849 [2024-07-26 01:16:45.053510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.053538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 00:34:14.849 [2024-07-26 01:16:45.053684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.053710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 00:34:14.849 [2024-07-26 01:16:45.053832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.849 [2024-07-26 01:16:45.053858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.849 qpair failed and we were unable to recover it. 00:34:14.849 [2024-07-26 01:16:45.053998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.054025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 
00:34:14.850 [2024-07-26 01:16:45.054167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.054194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 00:34:14.850 [2024-07-26 01:16:45.054330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.054356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 00:34:14.850 [2024-07-26 01:16:45.054472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.054498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 00:34:14.850 [2024-07-26 01:16:45.054658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.054686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 00:34:14.850 [2024-07-26 01:16:45.054846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.054873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 
00:34:14.850 [2024-07-26 01:16:45.054984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.055012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 00:34:14.850 [2024-07-26 01:16:45.055126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.055152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 00:34:14.850 [2024-07-26 01:16:45.055317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.055344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 00:34:14.850 [2024-07-26 01:16:45.055479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.055505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 00:34:14.850 [2024-07-26 01:16:45.055672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.055699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 
00:34:14.850 [2024-07-26 01:16:45.055841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.055868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 00:34:14.850 [2024-07-26 01:16:45.055985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.056014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 00:34:14.850 [2024-07-26 01:16:45.056140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.056169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 00:34:14.850 [2024-07-26 01:16:45.056308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.056335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 00:34:14.850 [2024-07-26 01:16:45.056464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.056490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 
00:34:14.850 [2024-07-26 01:16:45.056642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.056669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 00:34:14.850 [2024-07-26 01:16:45.056801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.056828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 00:34:14.850 [2024-07-26 01:16:45.056973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.057000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 00:34:14.850 [2024-07-26 01:16:45.057119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.057146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 00:34:14.850 [2024-07-26 01:16:45.057314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.057341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 
00:34:14.850 [2024-07-26 01:16:45.057475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.057502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 00:34:14.850 [2024-07-26 01:16:45.057623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.057650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 00:34:14.850 [2024-07-26 01:16:45.057764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.057791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 00:34:14.850 [2024-07-26 01:16:45.057931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.057957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 00:34:14.850 [2024-07-26 01:16:45.058097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.058137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 
00:34:14.850 [2024-07-26 01:16:45.058320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.058353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 00:34:14.850 [2024-07-26 01:16:45.058499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.058532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 00:34:14.850 [2024-07-26 01:16:45.058710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.058743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 00:34:14.850 [2024-07-26 01:16:45.058892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.058923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 00:34:14.850 [2024-07-26 01:16:45.059074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.059105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 
00:34:14.850 [2024-07-26 01:16:45.059221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.059249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 00:34:14.850 [2024-07-26 01:16:45.059356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.059383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.850 qpair failed and we were unable to recover it. 00:34:14.850 [2024-07-26 01:16:45.059483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.850 [2024-07-26 01:16:45.059510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 00:34:14.851 [2024-07-26 01:16:45.059639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.059665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 00:34:14.851 [2024-07-26 01:16:45.059807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.059833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 
00:34:14.851 [2024-07-26 01:16:45.059943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.059971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 00:34:14.851 [2024-07-26 01:16:45.060086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.060113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 00:34:14.851 [2024-07-26 01:16:45.060244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.060270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 00:34:14.851 [2024-07-26 01:16:45.060381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.060407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 00:34:14.851 [2024-07-26 01:16:45.060556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.060582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 
00:34:14.851 [2024-07-26 01:16:45.060716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.060743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 00:34:14.851 [2024-07-26 01:16:45.060881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.060908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 00:34:14.851 [2024-07-26 01:16:45.061089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.061122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 00:34:14.851 [2024-07-26 01:16:45.061286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.061313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 00:34:14.851 [2024-07-26 01:16:45.061429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.061455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 
00:34:14.851 [2024-07-26 01:16:45.061596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.061622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 00:34:14.851 [2024-07-26 01:16:45.061752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.061778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 00:34:14.851 [2024-07-26 01:16:45.061902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.061928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 00:34:14.851 [2024-07-26 01:16:45.062105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.062131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 00:34:14.851 [2024-07-26 01:16:45.062297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.062331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 
00:34:14.851 [2024-07-26 01:16:45.062462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.062488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 00:34:14.851 [2024-07-26 01:16:45.062657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.062689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 00:34:14.851 [2024-07-26 01:16:45.062835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.062861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 00:34:14.851 [2024-07-26 01:16:45.062976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.063003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 00:34:14.851 [2024-07-26 01:16:45.063150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.063178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 
00:34:14.851 [2024-07-26 01:16:45.063317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.063346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 00:34:14.851 [2024-07-26 01:16:45.063484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.063511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 00:34:14.851 [2024-07-26 01:16:45.063641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.063667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 00:34:14.851 [2024-07-26 01:16:45.063770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.063796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 00:34:14.851 [2024-07-26 01:16:45.063933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.063960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 
00:34:14.851 [2024-07-26 01:16:45.064080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.064113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 00:34:14.851 [2024-07-26 01:16:45.064252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.064279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 00:34:14.851 [2024-07-26 01:16:45.064382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.064408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 00:34:14.851 [2024-07-26 01:16:45.064551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.064577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 00:34:14.851 [2024-07-26 01:16:45.064737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.064763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 
00:34:14.851 [2024-07-26 01:16:45.064899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.064926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 00:34:14.851 [2024-07-26 01:16:45.065070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.065106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 00:34:14.851 [2024-07-26 01:16:45.065213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.065239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 00:34:14.851 [2024-07-26 01:16:45.065390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.065416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 00:34:14.851 [2024-07-26 01:16:45.065548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.065575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 
00:34:14.851 [2024-07-26 01:16:45.065707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.851 [2024-07-26 01:16:45.065734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.851 qpair failed and we were unable to recover it. 00:34:14.852 [2024-07-26 01:16:45.065873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.852 [2024-07-26 01:16:45.065900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.852 qpair failed and we were unable to recover it. 00:34:14.852 [2024-07-26 01:16:45.066034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.852 [2024-07-26 01:16:45.066066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.852 qpair failed and we were unable to recover it. 00:34:14.852 [2024-07-26 01:16:45.066188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.852 [2024-07-26 01:16:45.066214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.852 qpair failed and we were unable to recover it. 00:34:14.852 [2024-07-26 01:16:45.066365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.852 [2024-07-26 01:16:45.066391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.852 qpair failed and we were unable to recover it. 
00:34:14.852 [2024-07-26 01:16:45.066527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.852 [2024-07-26 01:16:45.066553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.852 qpair failed and we were unable to recover it. 00:34:14.852 [2024-07-26 01:16:45.066690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.852 [2024-07-26 01:16:45.066717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.852 qpair failed and we were unable to recover it. 00:34:14.852 [2024-07-26 01:16:45.066835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.852 [2024-07-26 01:16:45.066862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.852 qpair failed and we were unable to recover it. 00:34:14.852 [2024-07-26 01:16:45.066971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.852 [2024-07-26 01:16:45.067001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.852 qpair failed and we were unable to recover it. 00:34:14.852 [2024-07-26 01:16:45.067126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.852 [2024-07-26 01:16:45.067153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.852 qpair failed and we were unable to recover it. 
00:34:14.852 [2024-07-26 01:16:45.067288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.852 [2024-07-26 01:16:45.067314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.852 qpair failed and we were unable to recover it. 00:34:14.852 [2024-07-26 01:16:45.067418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.852 [2024-07-26 01:16:45.067444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.852 qpair failed and we were unable to recover it. 00:34:14.852 [2024-07-26 01:16:45.067540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.852 [2024-07-26 01:16:45.067566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.852 qpair failed and we were unable to recover it. 00:34:14.852 [2024-07-26 01:16:45.067699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.852 [2024-07-26 01:16:45.067726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.852 qpair failed and we were unable to recover it. 00:34:14.852 [2024-07-26 01:16:45.067831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.852 [2024-07-26 01:16:45.067857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.852 qpair failed and we were unable to recover it. 
00:34:14.852 [2024-07-26 01:16:45.067959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.852 [2024-07-26 01:16:45.067985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.852 qpair failed and we were unable to recover it. 00:34:14.852 [2024-07-26 01:16:45.068102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.852 [2024-07-26 01:16:45.068129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.852 qpair failed and we were unable to recover it. 00:34:14.852 [2024-07-26 01:16:45.068236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.852 [2024-07-26 01:16:45.068262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.852 qpair failed and we were unable to recover it. 00:34:14.852 [2024-07-26 01:16:45.068409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.852 [2024-07-26 01:16:45.068436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.852 qpair failed and we were unable to recover it. 00:34:14.852 [2024-07-26 01:16:45.068536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.852 [2024-07-26 01:16:45.068562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.852 qpair failed and we were unable to recover it. 
00:34:14.852 [2024-07-26 01:16:45.068665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.852 [2024-07-26 01:16:45.068691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.852 qpair failed and we were unable to recover it.
00:34:14.852 [2024-07-26 01:16:45.068794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.852 [2024-07-26 01:16:45.068820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.852 qpair failed and we were unable to recover it.
00:34:14.852 [2024-07-26 01:16:45.068952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.852 [2024-07-26 01:16:45.068994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.852 qpair failed and we were unable to recover it.
00:34:14.852 [2024-07-26 01:16:45.069153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.852 [2024-07-26 01:16:45.069183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.852 qpair failed and we were unable to recover it.
00:34:14.852 [2024-07-26 01:16:45.069198] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:14.852 [2024-07-26 01:16:45.069234] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:14.852 [2024-07-26 01:16:45.069249] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:14.852 [2024-07-26 01:16:45.069267] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:14.852 [2024-07-26 01:16:45.069278] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:14.852 [2024-07-26 01:16:45.069307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.852 [2024-07-26 01:16:45.069338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.852 qpair failed and we were unable to recover it.
00:34:14.852 [2024-07-26 01:16:45.069355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:34:14.852 [2024-07-26 01:16:45.069405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:34:14.852 [2024-07-26 01:16:45.069474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.852 [2024-07-26 01:16:45.069500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.852 qpair failed and we were unable to recover it.
00:34:14.852 [2024-07-26 01:16:45.069453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:34:14.852 [2024-07-26 01:16:45.069455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:34:14.852 [2024-07-26 01:16:45.069636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.852 [2024-07-26 01:16:45.069661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.852 qpair failed and we were unable to recover it.
00:34:14.852 [2024-07-26 01:16:45.069764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.852 [2024-07-26 01:16:45.069792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.852 qpair failed and we were unable to recover it.
00:34:14.852 [2024-07-26 01:16:45.069900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.852 [2024-07-26 01:16:45.069928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.852 qpair failed and we were unable to recover it. 00:34:14.852 [2024-07-26 01:16:45.070040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.852 [2024-07-26 01:16:45.070072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.852 qpair failed and we were unable to recover it. 00:34:14.852 [2024-07-26 01:16:45.070185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.852 [2024-07-26 01:16:45.070211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.852 qpair failed and we were unable to recover it. 00:34:14.852 [2024-07-26 01:16:45.070345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.852 [2024-07-26 01:16:45.070371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.852 qpair failed and we were unable to recover it. 00:34:14.852 [2024-07-26 01:16:45.070485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.852 [2024-07-26 01:16:45.070516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.852 qpair failed and we were unable to recover it. 
00:34:14.852 [2024-07-26 01:16:45.070627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.852 [2024-07-26 01:16:45.070654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.852 qpair failed and we were unable to recover it. 00:34:14.852 [2024-07-26 01:16:45.070771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.852 [2024-07-26 01:16:45.070797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.852 qpair failed and we were unable to recover it. 00:34:14.852 [2024-07-26 01:16:45.070903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.852 [2024-07-26 01:16:45.070929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.853 qpair failed and we were unable to recover it. 00:34:14.853 [2024-07-26 01:16:45.071045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-26 01:16:45.071077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.853 qpair failed and we were unable to recover it. 00:34:14.853 [2024-07-26 01:16:45.071190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-26 01:16:45.071217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.853 qpair failed and we were unable to recover it. 
00:34:14.853 [2024-07-26 01:16:45.071332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-26 01:16:45.071358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.853 qpair failed and we were unable to recover it. 00:34:14.853 [2024-07-26 01:16:45.071498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-26 01:16:45.071525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.853 qpair failed and we were unable to recover it. 00:34:14.853 [2024-07-26 01:16:45.071632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-26 01:16:45.071658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.853 qpair failed and we were unable to recover it. 00:34:14.853 [2024-07-26 01:16:45.071791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-26 01:16:45.071817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.853 qpair failed and we were unable to recover it. 00:34:14.853 [2024-07-26 01:16:45.071950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-26 01:16:45.071977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.853 qpair failed and we were unable to recover it. 
00:34:14.853 [2024-07-26 01:16:45.072086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-26 01:16:45.072116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.853 qpair failed and we were unable to recover it. 00:34:14.853 [2024-07-26 01:16:45.072255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-26 01:16:45.072282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.853 qpair failed and we were unable to recover it. 00:34:14.853 [2024-07-26 01:16:45.072431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-26 01:16:45.072458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.853 qpair failed and we were unable to recover it. 00:34:14.853 [2024-07-26 01:16:45.072588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-26 01:16:45.072615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.853 qpair failed and we were unable to recover it. 00:34:14.853 [2024-07-26 01:16:45.072724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-26 01:16:45.072752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.853 qpair failed and we were unable to recover it. 
00:34:14.853 [2024-07-26 01:16:45.072893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-26 01:16:45.072920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.853 qpair failed and we were unable to recover it. 00:34:14.853 [2024-07-26 01:16:45.073063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-26 01:16:45.073103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.853 qpair failed and we were unable to recover it. 00:34:14.853 [2024-07-26 01:16:45.073219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-26 01:16:45.073245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.853 qpair failed and we were unable to recover it. 00:34:14.853 [2024-07-26 01:16:45.073359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-26 01:16:45.073385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.853 qpair failed and we were unable to recover it. 00:34:14.853 [2024-07-26 01:16:45.073495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-26 01:16:45.073522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.853 qpair failed and we were unable to recover it. 
00:34:14.853 [2024-07-26 01:16:45.073662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-26 01:16:45.073688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.853 qpair failed and we were unable to recover it. 00:34:14.853 [2024-07-26 01:16:45.073802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-26 01:16:45.073827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.853 qpair failed and we were unable to recover it. 00:34:14.853 [2024-07-26 01:16:45.073943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-26 01:16:45.073969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.853 qpair failed and we were unable to recover it. 00:34:14.853 [2024-07-26 01:16:45.074076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-26 01:16:45.074110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.853 qpair failed and we were unable to recover it. 00:34:14.853 [2024-07-26 01:16:45.074216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-26 01:16:45.074242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.853 qpair failed and we were unable to recover it. 
00:34:14.853 [2024-07-26 01:16:45.074353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-26 01:16:45.074379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.853 qpair failed and we were unable to recover it. 00:34:14.853 [2024-07-26 01:16:45.074537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-26 01:16:45.074567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.853 qpair failed and we were unable to recover it. 00:34:14.853 [2024-07-26 01:16:45.074705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-26 01:16:45.074731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.853 qpair failed and we were unable to recover it. 00:34:14.853 [2024-07-26 01:16:45.074834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-26 01:16:45.074860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.853 qpair failed and we were unable to recover it. 00:34:14.853 [2024-07-26 01:16:45.074992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.853 [2024-07-26 01:16:45.075018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.853 qpair failed and we were unable to recover it. 
00:34:14.853 [2024-07-26 01:16:45.075136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.853 [2024-07-26 01:16:45.075162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.853 qpair failed and we were unable to recover it.
00:34:14.856 [... the same three-line sequence — posix.c:1023:posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error, then "qpair failed and we were unable to recover it." — repeats continuously through 01:16:45.092499, alternating between tqpair=0xd70600 and tqpair=0x7fba58000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:34:14.856 [2024-07-26 01:16:45.092635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.856 [2024-07-26 01:16:45.092663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.856 qpair failed and we were unable to recover it. 00:34:14.856 [2024-07-26 01:16:45.092809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.856 [2024-07-26 01:16:45.092837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.856 qpair failed and we were unable to recover it. 00:34:14.856 [2024-07-26 01:16:45.092939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.856 [2024-07-26 01:16:45.092966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.856 qpair failed and we were unable to recover it. 00:34:14.856 [2024-07-26 01:16:45.093112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.856 [2024-07-26 01:16:45.093140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.856 qpair failed and we were unable to recover it. 00:34:14.857 [2024-07-26 01:16:45.093271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.093298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 
00:34:14.857 [2024-07-26 01:16:45.093407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.093434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 00:34:14.857 [2024-07-26 01:16:45.093542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.093570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 00:34:14.857 [2024-07-26 01:16:45.093724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.093751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 00:34:14.857 [2024-07-26 01:16:45.093862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.093888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 00:34:14.857 [2024-07-26 01:16:45.094021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.094048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 
00:34:14.857 [2024-07-26 01:16:45.094184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.094211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 00:34:14.857 [2024-07-26 01:16:45.094320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.094347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 00:34:14.857 [2024-07-26 01:16:45.094454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.094481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 00:34:14.857 [2024-07-26 01:16:45.094588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.094615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 00:34:14.857 [2024-07-26 01:16:45.094754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.094781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 
00:34:14.857 [2024-07-26 01:16:45.094890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.094919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 00:34:14.857 [2024-07-26 01:16:45.095057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.095095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 00:34:14.857 [2024-07-26 01:16:45.095204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.095230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 00:34:14.857 [2024-07-26 01:16:45.095342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.095369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 00:34:14.857 [2024-07-26 01:16:45.095469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.095496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 
00:34:14.857 [2024-07-26 01:16:45.095598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.095625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 00:34:14.857 [2024-07-26 01:16:45.095764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.095791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 00:34:14.857 [2024-07-26 01:16:45.095929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.095956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 00:34:14.857 [2024-07-26 01:16:45.096069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.096096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 00:34:14.857 [2024-07-26 01:16:45.096228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.096255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 
00:34:14.857 [2024-07-26 01:16:45.096368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.096396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 00:34:14.857 [2024-07-26 01:16:45.096529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.096556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 00:34:14.857 [2024-07-26 01:16:45.096689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.096718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 00:34:14.857 [2024-07-26 01:16:45.096854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.096881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 00:34:14.857 [2024-07-26 01:16:45.097011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.097038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 
00:34:14.857 [2024-07-26 01:16:45.097163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.097190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 00:34:14.857 [2024-07-26 01:16:45.097299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.097335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 00:34:14.857 [2024-07-26 01:16:45.097439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.097467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 00:34:14.857 [2024-07-26 01:16:45.097571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.097599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 00:34:14.857 [2024-07-26 01:16:45.097734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.097761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 
00:34:14.857 [2024-07-26 01:16:45.097860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.097887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 00:34:14.857 [2024-07-26 01:16:45.098030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.098056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 00:34:14.857 [2024-07-26 01:16:45.098178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.098204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 00:34:14.857 [2024-07-26 01:16:45.098315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.098342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 00:34:14.857 [2024-07-26 01:16:45.098452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.098478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 
00:34:14.857 [2024-07-26 01:16:45.098587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.098614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 00:34:14.857 [2024-07-26 01:16:45.098758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.857 [2024-07-26 01:16:45.098785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.857 qpair failed and we were unable to recover it. 00:34:14.858 [2024-07-26 01:16:45.098920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.098946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 00:34:14.858 [2024-07-26 01:16:45.099081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.099124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 00:34:14.858 [2024-07-26 01:16:45.099290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.099317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 
00:34:14.858 [2024-07-26 01:16:45.099459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.099485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 00:34:14.858 [2024-07-26 01:16:45.099633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.099660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 00:34:14.858 [2024-07-26 01:16:45.099766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.099793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 00:34:14.858 [2024-07-26 01:16:45.099939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.099982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 00:34:14.858 [2024-07-26 01:16:45.100110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.100139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 
00:34:14.858 [2024-07-26 01:16:45.100248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.100275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 00:34:14.858 [2024-07-26 01:16:45.100399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.100427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 00:34:14.858 [2024-07-26 01:16:45.100541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.100568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 00:34:14.858 [2024-07-26 01:16:45.100740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.100768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 00:34:14.858 [2024-07-26 01:16:45.100876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.100902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 
00:34:14.858 [2024-07-26 01:16:45.101012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.101039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 00:34:14.858 [2024-07-26 01:16:45.101168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.101194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 00:34:14.858 [2024-07-26 01:16:45.101338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.101365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 00:34:14.858 [2024-07-26 01:16:45.101507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.101534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 00:34:14.858 [2024-07-26 01:16:45.101634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.101661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 
00:34:14.858 [2024-07-26 01:16:45.101767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.101795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 00:34:14.858 [2024-07-26 01:16:45.101902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.101930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 00:34:14.858 [2024-07-26 01:16:45.102092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.102132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 00:34:14.858 [2024-07-26 01:16:45.102246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.102273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 00:34:14.858 [2024-07-26 01:16:45.102385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.102412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 
00:34:14.858 [2024-07-26 01:16:45.102516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.102543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 00:34:14.858 [2024-07-26 01:16:45.102675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.102702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 00:34:14.858 [2024-07-26 01:16:45.102809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.102836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 00:34:14.858 [2024-07-26 01:16:45.102950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.102977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 00:34:14.858 [2024-07-26 01:16:45.103123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.103152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 
00:34:14.858 [2024-07-26 01:16:45.103262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.103294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 00:34:14.858 [2024-07-26 01:16:45.103441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.103469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 00:34:14.858 [2024-07-26 01:16:45.103581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.103609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 00:34:14.858 [2024-07-26 01:16:45.103746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.103774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 00:34:14.858 [2024-07-26 01:16:45.103882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.858 [2024-07-26 01:16:45.103909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.858 qpair failed and we were unable to recover it. 
00:34:14.858 [2024-07-26 01:16:45.104018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.858 [2024-07-26 01:16:45.104045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.858 qpair failed and we were unable to recover it.
00:34:14.858 [... same three-line message group repeated for tqpair=0x7fba58000b90 through 2024-07-26 01:16:45.112589 ...]
00:34:14.860 [2024-07-26 01:16:45.113010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.860 [2024-07-26 01:16:45.113053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.860 qpair failed and we were unable to recover it.
00:34:14.860 [... same three-line message group repeated for tqpair=0xd70600 through 2024-07-26 01:16:45.115720 ...]
00:34:14.861 [2024-07-26 01:16:45.115871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.861 [2024-07-26 01:16:45.115913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.861 qpair failed and we were unable to recover it.
00:34:14.861 [... same three-line message group repeated for tqpair=0x7fba58000b90 through 2024-07-26 01:16:45.119941 ...]
00:34:14.861 [2024-07-26 01:16:45.120094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.861 [2024-07-26 01:16:45.120141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.861 qpair failed and we were unable to recover it.
00:34:14.861 [... same three-line message group repeated for tqpair=0xd70600 through 2024-07-26 01:16:45.122013 ...]
00:34:14.862 [2024-07-26 01:16:45.122154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.122182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.122329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.122356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.122462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.122490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.122627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.122654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.122758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.122784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 
00:34:14.862 [2024-07-26 01:16:45.122926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.122954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.123069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.123096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.123210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.123237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.123339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.123366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.123499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.123526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 
00:34:14.862 [2024-07-26 01:16:45.123628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.123654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.123763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.123790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.123928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.123955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.124050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.124088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.124220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.124247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 
00:34:14.862 [2024-07-26 01:16:45.124384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.124411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.124524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.124552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.124700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.124730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.124846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.124874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.125034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.125066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 
00:34:14.862 [2024-07-26 01:16:45.125198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.125226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.125368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.125395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.125500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.125526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.125645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.125672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.125800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.125827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 
00:34:14.862 [2024-07-26 01:16:45.125959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.125986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.126117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.126144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.126282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.126309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.126419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.126447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.126588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.126615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 
00:34:14.862 [2024-07-26 01:16:45.126725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.126752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.126884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.126911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.127046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.127081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.127182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.127209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.127315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.127341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 
00:34:14.862 [2024-07-26 01:16:45.127474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.127502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.127664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.127691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.127818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.127846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.127953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.127982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.128117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.128145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 
00:34:14.862 [2024-07-26 01:16:45.128277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.862 [2024-07-26 01:16:45.128304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.862 qpair failed and we were unable to recover it. 00:34:14.862 [2024-07-26 01:16:45.128411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.128439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 00:34:14.863 [2024-07-26 01:16:45.128564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.128591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 00:34:14.863 [2024-07-26 01:16:45.128698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.128725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 00:34:14.863 [2024-07-26 01:16:45.128843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.128870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 
00:34:14.863 [2024-07-26 01:16:45.128984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.129011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 00:34:14.863 [2024-07-26 01:16:45.129151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.129178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 00:34:14.863 [2024-07-26 01:16:45.129279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.129306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 00:34:14.863 [2024-07-26 01:16:45.129416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.129442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 00:34:14.863 [2024-07-26 01:16:45.129554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.129581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 
00:34:14.863 [2024-07-26 01:16:45.129736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.129778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 00:34:14.863 [2024-07-26 01:16:45.129885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.129914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 00:34:14.863 [2024-07-26 01:16:45.130028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.130056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 00:34:14.863 [2024-07-26 01:16:45.130194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.130221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 00:34:14.863 [2024-07-26 01:16:45.130328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.130354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 
00:34:14.863 [2024-07-26 01:16:45.130490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.130533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 00:34:14.863 [2024-07-26 01:16:45.130660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.130688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 00:34:14.863 [2024-07-26 01:16:45.130800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.130828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 00:34:14.863 [2024-07-26 01:16:45.130990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.131017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 00:34:14.863 [2024-07-26 01:16:45.131143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.131170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 
00:34:14.863 [2024-07-26 01:16:45.131277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.131303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 00:34:14.863 [2024-07-26 01:16:45.131412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.131438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 00:34:14.863 [2024-07-26 01:16:45.131542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.131568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 00:34:14.863 [2024-07-26 01:16:45.131706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.131733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 00:34:14.863 [2024-07-26 01:16:45.131846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.131876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 
00:34:14.863 [2024-07-26 01:16:45.131987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.132014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 00:34:14.863 [2024-07-26 01:16:45.132142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.132169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 00:34:14.863 [2024-07-26 01:16:45.132301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.132329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 00:34:14.863 [2024-07-26 01:16:45.132466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.132493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 00:34:14.863 [2024-07-26 01:16:45.132603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.132631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 
00:34:14.863 [2024-07-26 01:16:45.132733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.132760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 00:34:14.863 [2024-07-26 01:16:45.132863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.132890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 00:34:14.863 [2024-07-26 01:16:45.132989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.133016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 00:34:14.863 [2024-07-26 01:16:45.133150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.133178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 00:34:14.863 [2024-07-26 01:16:45.133284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.863 [2024-07-26 01:16:45.133312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.863 qpair failed and we were unable to recover it. 
00:34:14.863 [2024-07-26 01:16:45.133419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.863 [2024-07-26 01:16:45.133445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.863 qpair failed and we were unable to recover it.
00:34:14.863 [2024-07-26 01:16:45.134011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.864 [2024-07-26 01:16:45.134040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.864 qpair failed and we were unable to recover it.
00:34:14.864 [2024-07-26 01:16:45.137989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.864 [2024-07-26 01:16:45.138031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.864 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." — repeats continuously from 01:16:45.133419 through 01:16:45.150418, alternating between tqpair=0xd70600, tqpair=0x7fba58000b90, and tqpair=0x7fba68000b90, always against addr=10.0.0.2, port=4420 ...]
00:34:14.866 [2024-07-26 01:16:45.150558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.866 [2024-07-26 01:16:45.150589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.866 qpair failed and we were unable to recover it.
00:34:14.866 [2024-07-26 01:16:45.150699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.150727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.150839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.150868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.150977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.151004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.151114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.151142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.151242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.151270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 
00:34:14.866 [2024-07-26 01:16:45.151435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.151461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.151578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.151606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.151740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.151769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.151883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.151911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.152017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.152045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 
00:34:14.866 [2024-07-26 01:16:45.152216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.152245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.152357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.152386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.152529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.152557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.152661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.152689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.152828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.152856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 
00:34:14.866 [2024-07-26 01:16:45.152986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.153014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.153137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.153168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.153282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.153309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.153418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.153452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.153571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.153598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 
00:34:14.866 [2024-07-26 01:16:45.153702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.153730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.153866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.153893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.154009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.154036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.154147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.154175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.154285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.154312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 
00:34:14.866 [2024-07-26 01:16:45.154442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.154469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.154605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.154648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.154761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.154792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.154937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.154979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.155117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.155147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 
00:34:14.866 [2024-07-26 01:16:45.155257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.155286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.155395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.155423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.155540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.155569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.155684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.155712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.155820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.155848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 
00:34:14.866 [2024-07-26 01:16:45.155983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.156011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.156164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.156207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.156318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.156347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.156480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.156507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.156659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.156687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 
00:34:14.866 [2024-07-26 01:16:45.156804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.156832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.156967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.156994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.157102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.157130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.157232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.157259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.157365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.157392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 
00:34:14.866 [2024-07-26 01:16:45.157503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.157530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.157630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.157656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.157753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.157780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.157878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.157904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.158038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.158073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 
00:34:14.866 [2024-07-26 01:16:45.158206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.158234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.158344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.158372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.158480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.158507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.158624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.158655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.158777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.158804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 
00:34:14.866 [2024-07-26 01:16:45.158945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.158971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.159077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.159106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.866 qpair failed and we were unable to recover it. 00:34:14.866 [2024-07-26 01:16:45.159214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.866 [2024-07-26 01:16:45.159241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.867 qpair failed and we were unable to recover it. 00:34:14.867 [2024-07-26 01:16:45.159356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.867 [2024-07-26 01:16:45.159384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.867 qpair failed and we were unable to recover it. 00:34:14.867 [2024-07-26 01:16:45.159497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.867 [2024-07-26 01:16:45.159525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.867 qpair failed and we were unable to recover it. 
00:34:14.867 [2024-07-26 01:16:45.159631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.867 [2024-07-26 01:16:45.159658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.867 qpair failed and we were unable to recover it. 00:34:14.867 [2024-07-26 01:16:45.159817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.867 [2024-07-26 01:16:45.159845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.867 qpair failed and we were unable to recover it. 00:34:14.867 [2024-07-26 01:16:45.159954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.867 [2024-07-26 01:16:45.159981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.867 qpair failed and we were unable to recover it. 00:34:14.867 [2024-07-26 01:16:45.160089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.867 [2024-07-26 01:16:45.160117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.867 qpair failed and we were unable to recover it. 00:34:14.867 [2024-07-26 01:16:45.160214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.867 [2024-07-26 01:16:45.160241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.867 qpair failed and we were unable to recover it. 
00:34:14.867 [2024-07-26 01:16:45.160379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.867 [2024-07-26 01:16:45.160405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.867 qpair failed and we were unable to recover it. 00:34:14.867 [2024-07-26 01:16:45.160544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.867 [2024-07-26 01:16:45.160571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.867 qpair failed and we were unable to recover it. 00:34:14.867 [2024-07-26 01:16:45.160715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.867 [2024-07-26 01:16:45.160742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.867 qpair failed and we were unable to recover it. 00:34:14.867 [2024-07-26 01:16:45.160849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.867 [2024-07-26 01:16:45.160876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.867 qpair failed and we were unable to recover it. 00:34:14.867 [2024-07-26 01:16:45.160979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.867 [2024-07-26 01:16:45.161005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.867 qpair failed and we were unable to recover it. 
00:34:14.867 [2024-07-26 01:16:45.161117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.867 [2024-07-26 01:16:45.161144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.867 qpair failed and we were unable to recover it. 00:34:14.867 [2024-07-26 01:16:45.161279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.867 [2024-07-26 01:16:45.161306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.867 qpair failed and we were unable to recover it. 00:34:14.867 [2024-07-26 01:16:45.161411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.867 [2024-07-26 01:16:45.161438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.867 qpair failed and we were unable to recover it. 00:34:14.867 [2024-07-26 01:16:45.161542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.867 [2024-07-26 01:16:45.161569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.867 qpair failed and we were unable to recover it. 00:34:14.867 [2024-07-26 01:16:45.161671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.867 [2024-07-26 01:16:45.161698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.867 qpair failed and we were unable to recover it. 
00:34:14.867 [2024-07-26 01:16:45.161800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.867 [2024-07-26 01:16:45.161826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.867 qpair failed and we were unable to recover it.
00:34:14.867 [2024-07-26 01:16:45.164143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.867 [2024-07-26 01:16:45.164185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.867 qpair failed and we were unable to recover it.
00:34:14.867 [2024-07-26 01:16:45.164317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.867 [2024-07-26 01:16:45.164358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.867 qpair failed and we were unable to recover it.
00:34:14.867 [2024-07-26 01:16:45.164489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.867 [2024-07-26 01:16:45.164529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.868 qpair failed and we were unable to recover it.
00:34:14.868 [identical connect() failed, errno = 111 / "qpair failed and we were unable to recover it." pairs repeated continuously from 01:16:45.161935 through 01:16:45.179093, alternating over the same four tqpairs (0xd70600, 0x7fba58000b90, 0x7fba60000b90, 0x7fba68000b90), all against addr=10.0.0.2, port=4420]
00:34:14.868 [2024-07-26 01:16:45.179209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.868 [2024-07-26 01:16:45.179236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.868 qpair failed and we were unable to recover it. 00:34:14.868 [2024-07-26 01:16:45.179344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.868 [2024-07-26 01:16:45.179370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.868 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.179508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.179535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.179649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.179676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.179771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.179799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 
00:34:14.869 [2024-07-26 01:16:45.179904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.179931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.180068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.180096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.180193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.180221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.180327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.180355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.180492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.180521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 
00:34:14.869 [2024-07-26 01:16:45.180689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.180716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.180862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.180904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.181049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.181087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.181227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.181255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.181361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.181389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 
00:34:14.869 [2024-07-26 01:16:45.181490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.181518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.181649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.181676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.181777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.181807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.181938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.181966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.182083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.182111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 
00:34:14.869 [2024-07-26 01:16:45.182222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.182251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.182360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.182387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.182491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.182519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.182625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.182653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.182830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.182876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 
00:34:14.869 [2024-07-26 01:16:45.183011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.183040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.183149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.183177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.183316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.183343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.183449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.183476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.183612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.183641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 
00:34:14.869 [2024-07-26 01:16:45.183775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.183802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.183908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.183936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.184045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.184083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.184253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.184281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.184405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.184432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 
00:34:14.869 [2024-07-26 01:16:45.184545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.184572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.184682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.184710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.184808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.184835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.184958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.184985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.185093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.185122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 
00:34:14.869 [2024-07-26 01:16:45.185255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.185282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.185425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.185452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.185560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.185588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.185719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.185746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.185891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.185919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 
00:34:14.869 [2024-07-26 01:16:45.186072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.186113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.186263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.186293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.186465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.186493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.186596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.186624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.186728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.186755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 
00:34:14.869 [2024-07-26 01:16:45.186869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.186897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.187009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.187038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.187192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.187232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.187357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.187386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.187525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.187552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 
00:34:14.869 [2024-07-26 01:16:45.187667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.187695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.187804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.187832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.187933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.187960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.188075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.188103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.188223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.188250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 
00:34:14.869 [2024-07-26 01:16:45.188360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.188387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.188521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.869 [2024-07-26 01:16:45.188547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.869 qpair failed and we were unable to recover it. 00:34:14.869 [2024-07-26 01:16:45.188681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.870 [2024-07-26 01:16:45.188708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.870 qpair failed and we were unable to recover it. 00:34:14.870 [2024-07-26 01:16:45.188816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.870 [2024-07-26 01:16:45.188846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.870 qpair failed and we were unable to recover it. 00:34:14.870 [2024-07-26 01:16:45.189001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.870 [2024-07-26 01:16:45.189043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.870 qpair failed and we were unable to recover it. 
00:34:14.870 [2024-07-26 01:16:45.189172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.870 [2024-07-26 01:16:45.189202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.870 qpair failed and we were unable to recover it. 00:34:14.870 [2024-07-26 01:16:45.189345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.870 [2024-07-26 01:16:45.189372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.870 qpair failed and we were unable to recover it. 00:34:14.870 [2024-07-26 01:16:45.189484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.870 [2024-07-26 01:16:45.189511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.870 qpair failed and we were unable to recover it. 00:34:14.870 [2024-07-26 01:16:45.189622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.870 [2024-07-26 01:16:45.189652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.870 qpair failed and we were unable to recover it. 00:34:14.870 [2024-07-26 01:16:45.189755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.870 [2024-07-26 01:16:45.189783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.870 qpair failed and we were unable to recover it. 
00:34:14.870 [2024-07-26 01:16:45.189932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.870 [2024-07-26 01:16:45.189974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420 00:34:14.870 qpair failed and we were unable to recover it. 00:34:14.870 [2024-07-26 01:16:45.190124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.870 [2024-07-26 01:16:45.190163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.870 qpair failed and we were unable to recover it. 00:34:14.870 [2024-07-26 01:16:45.190283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.870 [2024-07-26 01:16:45.190311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.870 qpair failed and we were unable to recover it. 00:34:14.870 [2024-07-26 01:16:45.190412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.870 [2024-07-26 01:16:45.190439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.870 qpair failed and we were unable to recover it. 00:34:14.870 [2024-07-26 01:16:45.190576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.870 [2024-07-26 01:16:45.190603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.870 qpair failed and we were unable to recover it. 
00:34:14.870 [2024-07-26 01:16:45.190702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.190729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.190834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.190860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.190972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.191002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.191160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.191189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.191301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.191338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.191478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.191504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.191616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.191642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.191741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.191768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.191906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.191932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.192065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.192117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.192243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.192284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.192426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.192460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.192563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.192591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.192730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.192759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.192867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.192895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.193057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.193091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.193197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.193230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.193345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.193372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.193505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.193533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.193639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.193667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.193805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.193832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.193996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.194024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.194157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.194185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.194325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.194352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.194491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.194519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.194642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.194671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.194791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.194819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.194961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.194989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.195128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.195156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.195263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.195290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.195399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.195426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.195525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.195552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.195683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.195710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.195820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.195847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.195977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.196003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.196129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.196169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.196283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.196311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.196423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.196450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.196591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.196617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.196718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.196745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.196875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.196901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.197007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.197034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.197146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.197173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.197279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.197306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.197435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.197462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.197598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.197625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.197737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.197764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.197873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.197900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.198016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.198067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.198185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.198212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.198319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.198346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.198482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.198508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.198645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.198670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.198812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.198840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.198944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.198971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.870 [2024-07-26 01:16:45.199091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.870 [2024-07-26 01:16:45.199119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.870 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.199218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.199245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.199380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.199406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.199520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.199547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.199685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.199713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.199821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.199849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.199989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.200016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.200142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.200169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.200275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.200303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.200429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.200456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.200595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.200623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.200728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.200756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.200856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.200883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.200992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.201020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.201133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.201160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.201269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.201296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.201406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.201432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.201533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.201559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.201692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.201718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.201834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.201874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.201992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.202022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.202142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.202170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.202287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.202315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.202450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.202478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.202577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.202604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.202706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.202733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.202842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.202869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.203007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.203033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.203153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.203179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.203296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.203326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.203437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.203463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.203605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.203631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.203763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.203789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.203902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.203928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.204042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.204084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.204207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.204236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.204335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.204362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.204469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.204496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.204634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.204661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.204796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.204823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.204934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.204962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.205077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.205105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.205220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.205249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.205383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.205410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.205536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.205564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.205705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.205733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.205857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.205884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.206021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.206071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.206196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.206223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.206355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.206382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.206522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.206548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.206645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.206671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.206779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.206805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.206914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.206940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.207079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.207107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.207204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.207235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.207346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.207374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.207510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.207535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.207646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.207672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.207775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.871 [2024-07-26 01:16:45.207801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:14.871 qpair failed and we were unable to recover it.
00:34:14.871 [2024-07-26 01:16:45.207897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.871 [2024-07-26 01:16:45.207924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.871 qpair failed and we were unable to recover it. 00:34:14.871 [2024-07-26 01:16:45.208055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.871 [2024-07-26 01:16:45.208087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.871 qpair failed and we were unable to recover it. 00:34:14.871 [2024-07-26 01:16:45.208195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.871 [2024-07-26 01:16:45.208221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.871 qpair failed and we were unable to recover it. 00:34:14.871 [2024-07-26 01:16:45.208322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.871 [2024-07-26 01:16:45.208348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.871 qpair failed and we were unable to recover it. 00:34:14.871 [2024-07-26 01:16:45.208487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.871 [2024-07-26 01:16:45.208513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.871 qpair failed and we were unable to recover it. 
00:34:14.871 [2024-07-26 01:16:45.208635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.871 [2024-07-26 01:16:45.208662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.871 qpair failed and we were unable to recover it. 00:34:14.871 [2024-07-26 01:16:45.208770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.871 [2024-07-26 01:16:45.208796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.871 qpair failed and we were unable to recover it. 00:34:14.871 [2024-07-26 01:16:45.208932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.208958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.209120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.209147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.209278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.209304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 
00:34:14.872 [2024-07-26 01:16:45.209402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.209429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.209534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.209560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.209694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.209720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 01:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:14.872 [2024-07-26 01:16:45.209829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.209855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 01:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:34:14.872 [2024-07-26 01:16:45.209985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.210011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 
00:34:14.872 01:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:14.872 [2024-07-26 01:16:45.210134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.210161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 01:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:14.872 [2024-07-26 01:16:45.210274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.210302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 01:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:14.872 [2024-07-26 01:16:45.210444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.210470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.210586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.210611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 
00:34:14.872 [2024-07-26 01:16:45.210722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.210747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.210853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.210883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.210993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.211019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.211157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.211184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.211289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.211316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 
00:34:14.872 [2024-07-26 01:16:45.211413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.211439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.211543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.211570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.211681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.211708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.211844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.211870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.212002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.212041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 
00:34:14.872 [2024-07-26 01:16:45.212166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.212195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.212338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.212366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.212471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.212497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.212612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.212639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.212773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.212799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 
00:34:14.872 [2024-07-26 01:16:45.212904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.212931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.213071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.213097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.213230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.213257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.213393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.213420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.213528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.213556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 
00:34:14.872 [2024-07-26 01:16:45.213672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.213698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.213801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.213827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.213975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.214016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.214140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.214171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.214286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.214315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 
00:34:14.872 [2024-07-26 01:16:45.214450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.214478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.214586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.214614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.214726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.214753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.214862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.214895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.215004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.215033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 
00:34:14.872 [2024-07-26 01:16:45.215180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.215207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.215321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.215348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.215463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.215491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.215601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.215629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.215745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.215772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 
00:34:14.872 [2024-07-26 01:16:45.215900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.215927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.216068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.216097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.216210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.216238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.216349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.216377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.216508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.216535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 
00:34:14.872 [2024-07-26 01:16:45.216647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.216674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.216837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.216864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.217010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.217039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.217160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.217187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:14.872 [2024-07-26 01:16:45.217296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.217323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 
00:34:14.872 [2024-07-26 01:16:45.217441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.872 [2024-07-26 01:16:45.217467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:14.872 qpair failed and we were unable to recover it. 00:34:15.139 [2024-07-26 01:16:45.217572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.139 [2024-07-26 01:16:45.217600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.139 qpair failed and we were unable to recover it. 00:34:15.139 [2024-07-26 01:16:45.217714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.139 [2024-07-26 01:16:45.217742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.139 qpair failed and we were unable to recover it. 00:34:15.139 [2024-07-26 01:16:45.217855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.139 [2024-07-26 01:16:45.217883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.139 qpair failed and we were unable to recover it. 00:34:15.139 [2024-07-26 01:16:45.217996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.139 [2024-07-26 01:16:45.218025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.139 qpair failed and we were unable to recover it. 
00:34:15.139 [2024-07-26 01:16:45.218146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.139 [2024-07-26 01:16:45.218174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.139 qpair failed and we were unable to recover it. 00:34:15.139 [2024-07-26 01:16:45.218302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.139 [2024-07-26 01:16:45.218329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.139 qpair failed and we were unable to recover it. 00:34:15.139 [2024-07-26 01:16:45.218464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.139 [2024-07-26 01:16:45.218491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.139 qpair failed and we were unable to recover it. 00:34:15.139 [2024-07-26 01:16:45.218594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.139 [2024-07-26 01:16:45.218622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.139 qpair failed and we were unable to recover it. 00:34:15.139 [2024-07-26 01:16:45.218770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.139 [2024-07-26 01:16:45.218797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.139 qpair failed and we were unable to recover it. 
00:34:15.139 [2024-07-26 01:16:45.218920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.139 [2024-07-26 01:16:45.218959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.139 qpair failed and we were unable to recover it. 00:34:15.139 [2024-07-26 01:16:45.219081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.139 [2024-07-26 01:16:45.219109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.139 qpair failed and we were unable to recover it. 00:34:15.139 [2024-07-26 01:16:45.219245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.139 [2024-07-26 01:16:45.219271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.139 qpair failed and we were unable to recover it. 00:34:15.139 [2024-07-26 01:16:45.219376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.139 [2024-07-26 01:16:45.219404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.139 qpair failed and we were unable to recover it. 00:34:15.139 [2024-07-26 01:16:45.219505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.139 [2024-07-26 01:16:45.219531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.139 qpair failed and we were unable to recover it. 
00:34:15.139 [2024-07-26 01:16:45.219632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.139 [2024-07-26 01:16:45.219659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.139 qpair failed and we were unable to recover it. 00:34:15.139 [2024-07-26 01:16:45.219772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.139 [2024-07-26 01:16:45.219798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.139 qpair failed and we were unable to recover it. 00:34:15.139 [2024-07-26 01:16:45.219909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.139 [2024-07-26 01:16:45.219936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.139 qpair failed and we were unable to recover it. 00:34:15.139 [2024-07-26 01:16:45.220070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.139 [2024-07-26 01:16:45.220098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.139 qpair failed and we were unable to recover it. 00:34:15.139 [2024-07-26 01:16:45.220204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.139 [2024-07-26 01:16:45.220230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.139 qpair failed and we were unable to recover it. 
00:34:15.139 [2024-07-26 01:16:45.220369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.139 [2024-07-26 01:16:45.220395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.139 qpair failed and we were unable to recover it. 00:34:15.139 [2024-07-26 01:16:45.220507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.139 [2024-07-26 01:16:45.220533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.139 qpair failed and we were unable to recover it. 00:34:15.139 [2024-07-26 01:16:45.220643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.139 [2024-07-26 01:16:45.220670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.139 qpair failed and we were unable to recover it. 00:34:15.139 [2024-07-26 01:16:45.220798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.139 [2024-07-26 01:16:45.220824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.139 qpair failed and we were unable to recover it. 00:34:15.139 [2024-07-26 01:16:45.220933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.139 [2024-07-26 01:16:45.220959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.139 qpair failed and we were unable to recover it. 
00:34:15.139 [2024-07-26 01:16:45.221093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.139 [2024-07-26 01:16:45.221120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.139 qpair failed and we were unable to recover it. 00:34:15.139 [2024-07-26 01:16:45.221255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.139 [2024-07-26 01:16:45.221282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.139 qpair failed and we were unable to recover it. 00:34:15.139 [2024-07-26 01:16:45.221407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.139 [2024-07-26 01:16:45.221448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.139 qpair failed and we were unable to recover it. 00:34:15.139 [2024-07-26 01:16:45.221593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.139 [2024-07-26 01:16:45.221621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.139 qpair failed and we were unable to recover it. 00:34:15.139 [2024-07-26 01:16:45.221759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.139 [2024-07-26 01:16:45.221786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.139 qpair failed and we were unable to recover it. 
00:34:15.139 [2024-07-26 01:16:45.221891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.139 [2024-07-26 01:16:45.221917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.139 qpair failed and we were unable to recover it. 00:34:15.140 [2024-07-26 01:16:45.222030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.222055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 00:34:15.140 [2024-07-26 01:16:45.222179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.222207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 00:34:15.140 [2024-07-26 01:16:45.222350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.222376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 00:34:15.140 [2024-07-26 01:16:45.222485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.222512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 
00:34:15.140 [2024-07-26 01:16:45.222614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.222642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 00:34:15.140 [2024-07-26 01:16:45.222752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.222779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 00:34:15.140 [2024-07-26 01:16:45.222923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.222964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 00:34:15.140 [2024-07-26 01:16:45.223081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.223111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 00:34:15.140 [2024-07-26 01:16:45.223254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.223282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 
00:34:15.140 [2024-07-26 01:16:45.223421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.223449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 00:34:15.140 [2024-07-26 01:16:45.223564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.223591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 00:34:15.140 [2024-07-26 01:16:45.223734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.223760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 00:34:15.140 [2024-07-26 01:16:45.223866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.223895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 00:34:15.140 [2024-07-26 01:16:45.224003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.224029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 
00:34:15.140 [2024-07-26 01:16:45.224138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.224164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 00:34:15.140 [2024-07-26 01:16:45.224273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.224298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 00:34:15.140 [2024-07-26 01:16:45.224408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.224434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 00:34:15.140 [2024-07-26 01:16:45.224534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.224559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 00:34:15.140 [2024-07-26 01:16:45.224663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.224689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 
00:34:15.140 [2024-07-26 01:16:45.224789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.224816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 00:34:15.140 [2024-07-26 01:16:45.224936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.224976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 00:34:15.140 [2024-07-26 01:16:45.225094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.225123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 00:34:15.140 [2024-07-26 01:16:45.225264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.225290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 00:34:15.140 [2024-07-26 01:16:45.225396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.225422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 
00:34:15.140 [2024-07-26 01:16:45.225532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.225559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 00:34:15.140 [2024-07-26 01:16:45.225657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.225683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 00:34:15.140 [2024-07-26 01:16:45.225783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.225808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 00:34:15.140 [2024-07-26 01:16:45.225906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.225932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 00:34:15.140 [2024-07-26 01:16:45.226044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.226080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 
00:34:15.140 [2024-07-26 01:16:45.226216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.226243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 00:34:15.140 [2024-07-26 01:16:45.226375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.226401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 00:34:15.140 [2024-07-26 01:16:45.226516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.226543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 00:34:15.140 [2024-07-26 01:16:45.226642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.226668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 00:34:15.140 [2024-07-26 01:16:45.226811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.226841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 
00:34:15.140 [2024-07-26 01:16:45.226976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.227003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 00:34:15.140 [2024-07-26 01:16:45.227137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.140 [2024-07-26 01:16:45.227165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.140 qpair failed and we were unable to recover it. 00:34:15.140 [2024-07-26 01:16:45.227275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.141 [2024-07-26 01:16:45.227304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.141 qpair failed and we were unable to recover it. 00:34:15.141 [2024-07-26 01:16:45.227409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.141 [2024-07-26 01:16:45.227436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.141 qpair failed and we were unable to recover it. 00:34:15.141 [2024-07-26 01:16:45.227573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.141 [2024-07-26 01:16:45.227600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.141 qpair failed and we were unable to recover it. 
00:34:15.141 [2024-07-26 01:16:45.227729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.141 [2024-07-26 01:16:45.227756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.141 qpair failed and we were unable to recover it. 00:34:15.141 [2024-07-26 01:16:45.227892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.141 [2024-07-26 01:16:45.227918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.141 qpair failed and we were unable to recover it. 00:34:15.141 [2024-07-26 01:16:45.228037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.141 [2024-07-26 01:16:45.228084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.141 qpair failed and we were unable to recover it. 00:34:15.141 [2024-07-26 01:16:45.228201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.141 [2024-07-26 01:16:45.228228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.141 qpair failed and we were unable to recover it. 00:34:15.141 [2024-07-26 01:16:45.228368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.141 [2024-07-26 01:16:45.228395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.141 qpair failed and we were unable to recover it. 
00:34:15.141 01:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:15.141 [2024-07-26 01:16:45.228535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.141 [2024-07-26 01:16:45.228562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:15.141 qpair failed and we were unable to recover it.
00:34:15.141 01:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:15.141 [2024-07-26 01:16:45.228701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.141 [2024-07-26 01:16:45.228733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:15.141 qpair failed and we were unable to recover it.
00:34:15.141 01:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:15.141 [2024-07-26 01:16:45.228839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.141 [2024-07-26 01:16:45.228866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:15.141 qpair failed and we were unable to recover it.
00:34:15.141 01:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:15.141 [2024-07-26 01:16:45.228987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.141 [2024-07-26 01:16:45.229028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:15.141 qpair failed and we were unable to recover it.
00:34:15.141 [2024-07-26 01:16:45.229184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.141 [2024-07-26 01:16:45.229213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:15.141 qpair failed and we were unable to recover it.
00:34:15.141 [2024-07-26 01:16:45.229336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.141 [2024-07-26 01:16:45.229364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:15.141 qpair failed and we were unable to recover it.
00:34:15.141 [2024-07-26 01:16:45.229509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.141 [2024-07-26 01:16:45.229537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:15.141 qpair failed and we were unable to recover it.
00:34:15.141 [2024-07-26 01:16:45.229643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.141 [2024-07-26 01:16:45.229670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:15.141 qpair failed and we were unable to recover it.
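The trace above shows nvmf/common.sh installing a cleanup trap before the test creates the Malloc0 bdev. A minimal sketch of that bash idiom, with stand-in stubs in place of the suite's real process_shm and nvmftestfini helpers (only the trap line is taken from the log; the stub bodies are illustrative assumptions):

```shell
#!/usr/bin/env bash
# Sketch of the cleanup-trap idiom from nvmf/common.sh: on interrupt,
# termination, or normal exit, collect shared-memory stats (best-effort,
# hence "|| :") and then tear the test environment down.
# process_shm and nvmftestfini are stand-ins for the suite's helpers.
process_shm() { echo "collected shm stats for $2"; }
nvmftestfini() { echo "nvmf test finished"; }

NVMF_APP_SHM_ID=0
trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT

echo "test body runs here"
```

The `|| :` guard keeps a failing stats collection from aborting the rest of the cleanup chain when the shell runs under `set -e`.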
00:34:15.141 [2024-07-26 01:16:45.229807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.141 [2024-07-26 01:16:45.229834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.141 qpair failed and we were unable to recover it. 00:34:15.141 [2024-07-26 01:16:45.229941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.141 [2024-07-26 01:16:45.229968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.141 qpair failed and we were unable to recover it. 00:34:15.141 [2024-07-26 01:16:45.230081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.141 [2024-07-26 01:16:45.230110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.141 qpair failed and we were unable to recover it. 00:34:15.141 [2024-07-26 01:16:45.230225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.141 [2024-07-26 01:16:45.230253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.141 qpair failed and we were unable to recover it. 00:34:15.141 [2024-07-26 01:16:45.230372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.141 [2024-07-26 01:16:45.230399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.141 qpair failed and we were unable to recover it. 
00:34:15.141 [2024-07-26 01:16:45.230539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.141 [2024-07-26 01:16:45.230566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.141 qpair failed and we were unable to recover it. 00:34:15.141 [2024-07-26 01:16:45.230716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.141 [2024-07-26 01:16:45.230744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.141 qpair failed and we were unable to recover it. 00:34:15.141 [2024-07-26 01:16:45.230855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.141 [2024-07-26 01:16:45.230884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.141 qpair failed and we were unable to recover it. 00:34:15.141 [2024-07-26 01:16:45.231040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.141 [2024-07-26 01:16:45.231094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.141 qpair failed and we were unable to recover it. 00:34:15.141 [2024-07-26 01:16:45.231207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.141 [2024-07-26 01:16:45.231234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.141 qpair failed and we were unable to recover it. 
00:34:15.141 [2024-07-26 01:16:45.231342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.141 [2024-07-26 01:16:45.231368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.141 qpair failed and we were unable to recover it. 00:34:15.141 [2024-07-26 01:16:45.231518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.141 [2024-07-26 01:16:45.231545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.141 qpair failed and we were unable to recover it. 00:34:15.141 [2024-07-26 01:16:45.231658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.141 [2024-07-26 01:16:45.231684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.141 qpair failed and we were unable to recover it. 00:34:15.141 [2024-07-26 01:16:45.231792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.141 [2024-07-26 01:16:45.231820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.141 qpair failed and we were unable to recover it. 00:34:15.141 [2024-07-26 01:16:45.231937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.141 [2024-07-26 01:16:45.231977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.141 qpair failed and we were unable to recover it. 
00:34:15.141 [2024-07-26 01:16:45.232123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.141 [2024-07-26 01:16:45.232152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.141 qpair failed and we were unable to recover it. 00:34:15.141 [2024-07-26 01:16:45.232263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.141 [2024-07-26 01:16:45.232289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.141 qpair failed and we were unable to recover it. 00:34:15.141 [2024-07-26 01:16:45.232403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.142 [2024-07-26 01:16:45.232430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.142 qpair failed and we were unable to recover it. 00:34:15.142 [2024-07-26 01:16:45.232574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.142 [2024-07-26 01:16:45.232600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.142 qpair failed and we were unable to recover it. 00:34:15.142 [2024-07-26 01:16:45.232755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.142 [2024-07-26 01:16:45.232786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.142 qpair failed and we were unable to recover it. 
00:34:15.142 [2024-07-26 01:16:45.232923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.142 [2024-07-26 01:16:45.232949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.142 qpair failed and we were unable to recover it. 00:34:15.142 [2024-07-26 01:16:45.233052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.142 [2024-07-26 01:16:45.233090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.142 qpair failed and we were unable to recover it. 00:34:15.142 [2024-07-26 01:16:45.233206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.142 [2024-07-26 01:16:45.233232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.142 qpair failed and we were unable to recover it. 00:34:15.142 [2024-07-26 01:16:45.233341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.142 [2024-07-26 01:16:45.233369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.142 qpair failed and we were unable to recover it. 00:34:15.142 [2024-07-26 01:16:45.233469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.142 [2024-07-26 01:16:45.233495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.142 qpair failed and we were unable to recover it. 
00:34:15.143 [2024-07-26 01:16:45.238037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.143 [2024-07-26 01:16:45.238085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:15.143 qpair failed and we were unable to recover it.
00:34:15.145 [2024-07-26 01:16:45.250804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.145 [2024-07-26 01:16:45.250830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.145 qpair failed and we were unable to recover it. 00:34:15.145 [2024-07-26 01:16:45.250941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.145 [2024-07-26 01:16:45.250967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.145 qpair failed and we were unable to recover it. 00:34:15.145 [2024-07-26 01:16:45.251071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.145 [2024-07-26 01:16:45.251098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.145 qpair failed and we were unable to recover it. 00:34:15.145 [2024-07-26 01:16:45.251197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.145 [2024-07-26 01:16:45.251223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.145 qpair failed and we were unable to recover it. 00:34:15.145 [2024-07-26 01:16:45.251359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.145 [2024-07-26 01:16:45.251385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.145 qpair failed and we were unable to recover it. 
00:34:15.145 [2024-07-26 01:16:45.251516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.145 [2024-07-26 01:16:45.251543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.145 qpair failed and we were unable to recover it. 00:34:15.145 [2024-07-26 01:16:45.251689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.145 [2024-07-26 01:16:45.251716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.145 qpair failed and we were unable to recover it. 00:34:15.145 [2024-07-26 01:16:45.251838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.145 [2024-07-26 01:16:45.251865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.145 qpair failed and we were unable to recover it. 00:34:15.145 [2024-07-26 01:16:45.251988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.145 [2024-07-26 01:16:45.252030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.145 qpair failed and we were unable to recover it. 00:34:15.145 Malloc0 00:34:15.145 [2024-07-26 01:16:45.252183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.145 [2024-07-26 01:16:45.252212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.145 qpair failed and we were unable to recover it. 
00:34:15.145 [2024-07-26 01:16:45.252322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.145 [2024-07-26 01:16:45.252349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.145 qpair failed and we were unable to recover it. 00:34:15.145 [2024-07-26 01:16:45.252476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.145 01:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.145 [2024-07-26 01:16:45.252503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.145 qpair failed and we were unable to recover it. 00:34:15.146 [2024-07-26 01:16:45.252624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.252651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.146 01:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:15.146 qpair failed and we were unable to recover it. 00:34:15.146 [2024-07-26 01:16:45.252766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 01:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.146 [2024-07-26 01:16:45.252804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 
00:34:15.146 [2024-07-26 01:16:45.252926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 01:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:15.146 [2024-07-26 01:16:45.252952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 00:34:15.146 [2024-07-26 01:16:45.253075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.253104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 00:34:15.146 [2024-07-26 01:16:45.253228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.253255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 00:34:15.146 [2024-07-26 01:16:45.253357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.253390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 00:34:15.146 [2024-07-26 01:16:45.253498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.253524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 
00:34:15.146 [2024-07-26 01:16:45.253651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.253678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 00:34:15.146 [2024-07-26 01:16:45.253792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.253821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 00:34:15.146 [2024-07-26 01:16:45.253927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.253954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 00:34:15.146 [2024-07-26 01:16:45.254057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.254090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 00:34:15.146 [2024-07-26 01:16:45.254220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.254246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 
00:34:15.146 [2024-07-26 01:16:45.254350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.254378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 00:34:15.146 [2024-07-26 01:16:45.254491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.254519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 00:34:15.146 [2024-07-26 01:16:45.254617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.254645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 00:34:15.146 [2024-07-26 01:16:45.254778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.254805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 00:34:15.146 [2024-07-26 01:16:45.254919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.254947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 
00:34:15.146 [2024-07-26 01:16:45.255057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.255090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 00:34:15.146 [2024-07-26 01:16:45.255209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.255235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 00:34:15.146 [2024-07-26 01:16:45.255361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.255388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 00:34:15.146 [2024-07-26 01:16:45.255489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.255516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 00:34:15.146 [2024-07-26 01:16:45.255631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.255658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 
00:34:15.146 [2024-07-26 01:16:45.255757] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:15.146 [2024-07-26 01:16:45.255794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.255820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 00:34:15.146 [2024-07-26 01:16:45.255932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.255969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 00:34:15.146 [2024-07-26 01:16:45.256113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.256139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 00:34:15.146 [2024-07-26 01:16:45.256245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.256272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 00:34:15.146 [2024-07-26 01:16:45.256394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.256420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 
00:34:15.146 [2024-07-26 01:16:45.256541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.256567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 00:34:15.146 [2024-07-26 01:16:45.256718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.256755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 00:34:15.146 [2024-07-26 01:16:45.256903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.256930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 00:34:15.146 [2024-07-26 01:16:45.257042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.257088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 00:34:15.146 [2024-07-26 01:16:45.257193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.257219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 
00:34:15.146 [2024-07-26 01:16:45.257328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.257354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 00:34:15.146 [2024-07-26 01:16:45.257502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.257529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.146 qpair failed and we were unable to recover it. 00:34:15.146 [2024-07-26 01:16:45.257661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.146 [2024-07-26 01:16:45.257688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 00:34:15.147 [2024-07-26 01:16:45.257798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.257826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 00:34:15.147 [2024-07-26 01:16:45.257937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.257964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 
00:34:15.147 [2024-07-26 01:16:45.258075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.258113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 00:34:15.147 [2024-07-26 01:16:45.258237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.258264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 00:34:15.147 [2024-07-26 01:16:45.258421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.258448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 00:34:15.147 [2024-07-26 01:16:45.258564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.258591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 00:34:15.147 [2024-07-26 01:16:45.258702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.258731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 
00:34:15.147 [2024-07-26 01:16:45.258836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.258863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 00:34:15.147 [2024-07-26 01:16:45.258977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.259018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 00:34:15.147 [2024-07-26 01:16:45.259174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.259207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 00:34:15.147 [2024-07-26 01:16:45.259348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.259382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 00:34:15.147 [2024-07-26 01:16:45.259491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.259519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 
00:34:15.147 [2024-07-26 01:16:45.259625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.259653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 00:34:15.147 [2024-07-26 01:16:45.259797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.259825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 00:34:15.147 [2024-07-26 01:16:45.259945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.259974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 00:34:15.147 [2024-07-26 01:16:45.260084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.260124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 00:34:15.147 [2024-07-26 01:16:45.260262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.260288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 
00:34:15.147 [2024-07-26 01:16:45.260400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.260427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 00:34:15.147 [2024-07-26 01:16:45.260590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.260617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 00:34:15.147 [2024-07-26 01:16:45.260753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.260780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 00:34:15.147 [2024-07-26 01:16:45.260917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.260944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 00:34:15.147 [2024-07-26 01:16:45.261057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.261090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 
00:34:15.147 [2024-07-26 01:16:45.261221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.261248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 00:34:15.147 [2024-07-26 01:16:45.261385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.261412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 00:34:15.147 [2024-07-26 01:16:45.261529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.261556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 00:34:15.147 [2024-07-26 01:16:45.261687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.261714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 00:34:15.147 [2024-07-26 01:16:45.261836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.261863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 
00:34:15.147 [2024-07-26 01:16:45.261975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.262002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 00:34:15.147 [2024-07-26 01:16:45.262121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.262148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 00:34:15.147 [2024-07-26 01:16:45.262257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.262284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 00:34:15.147 [2024-07-26 01:16:45.262392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.262420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 00:34:15.147 [2024-07-26 01:16:45.262556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.262583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 
00:34:15.147 [2024-07-26 01:16:45.262692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.262720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 00:34:15.147 [2024-07-26 01:16:45.262849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.262876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 00:34:15.147 [2024-07-26 01:16:45.262979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.263006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 00:34:15.147 [2024-07-26 01:16:45.263107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.147 [2024-07-26 01:16:45.263134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.147 qpair failed and we were unable to recover it. 00:34:15.147 [2024-07-26 01:16:45.263272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.148 [2024-07-26 01:16:45.263298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.148 qpair failed and we were unable to recover it. 
00:34:15.148 01:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:15.148 01:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:15.148 01:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:15.148 01:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:15.148 [2024-07-26 01:16:45.265698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.148 [2024-07-26 01:16:45.265739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba58000b90 with addr=10.0.0.2, port=4420
00:34:15.148 qpair failed and we were unable to recover it.
00:34:15.148 [2024-07-26 01:16:45.267179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.148 [2024-07-26 01:16:45.267218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd70600 with addr=10.0.0.2, port=4420
00:34:15.148 qpair failed and we were unable to recover it.
00:34:15.149 [2024-07-26 01:16:45.268733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.149 [2024-07-26 01:16:45.268775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba60000b90 with addr=10.0.0.2, port=4420
00:34:15.149 qpair failed and we were unable to recover it.
00:34:15.149 01:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:15.149 01:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:15.149 01:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:15.150 01:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:15.151 [2024-07-26 01:16:45.278736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.151 [2024-07-26 01:16:45.278763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.151 qpair failed and we were unable to recover it. 00:34:15.151 [2024-07-26 01:16:45.278866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.151 [2024-07-26 01:16:45.278893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.151 qpair failed and we were unable to recover it. 00:34:15.151 [2024-07-26 01:16:45.279008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.151 [2024-07-26 01:16:45.279034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.151 qpair failed and we were unable to recover it. 00:34:15.151 [2024-07-26 01:16:45.279153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.151 [2024-07-26 01:16:45.279179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.151 qpair failed and we were unable to recover it. 00:34:15.151 [2024-07-26 01:16:45.279307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.151 [2024-07-26 01:16:45.279344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420 00:34:15.151 qpair failed and we were unable to recover it. 
00:34:15.151 [2024-07-26 01:16:45.279459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.151 [2024-07-26 01:16:45.279485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:15.151 qpair failed and we were unable to recover it.
[the same connect() failed / qpair failed entries continue, interleaved with the shell trace below, through 01:16:45.280607]
00:34:15.151 01:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:15.151 01:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:15.151 01:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:15.151 01:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:15.151 [2024-07-26 01:16:45.280719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.151 [2024-07-26 01:16:45.280745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fba68000b90 with addr=10.0.0.2, port=4420
00:34:15.151 qpair failed and we were unable to recover it.
[the three entries above repeat verbatim with successive timestamps through 01:16:45.283887]
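Every connect() failure in the storm above reports errno = 111, which on Linux is ECONNREFUSED: the host kept dialing 10.0.0.2:4420 before the NVMe/TCP listener was added (the `nvmf_subsystem_add_listener` RPC and the "Target Listening" notice only appear later in the trace). A minimal standalone sketch of the same failure mode, not SPDK code, assuming a Linux host:

```python
import errno
import socket

def try_connect(addr: str, port: int, timeout: float = 0.5):
    """Attempt one TCP connect; return None on success, else the errno."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        try:
            s.connect((addr, port))
            return None
        except OSError as e:
            return e.errno

# Provoke the same error locally: bind an ephemeral port to learn a port
# number that is free, release it, then connect to it. Nothing is
# listening there, so the kernel answers with a refusal, exactly like the
# host side saw before the 4420 listener existed.
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
closed_port = probe.getsockname()[1]
probe.close()

result = try_connect("127.0.0.1", closed_port)
# On Linux, errno.ECONNREFUSED is 111, matching the log's "errno = 111".
print(result, result == errno.ECONNREFUSED)
```

The qpair-level retry loop in the log is just this connect attempt repeated until the listener finally comes up.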
00:34:15.152 [2024-07-26 01:16:45.284018] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:15.152 [2024-07-26 01:16:45.286521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.152 [2024-07-26 01:16:45.286672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.152 [2024-07-26 01:16:45.286717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.152 [2024-07-26 01:16:45.286738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.152 [2024-07-26 01:16:45.286752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.152 [2024-07-26 01:16:45.286803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.152 qpair failed and we were unable to recover it.
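Once the listener is up, the failure changes shape: the TCP connect succeeds but the Fabrics CONNECT command completes with "sct 1, sc 130". Assuming the NVMe and NVMe-oF specification status tables (SCT 0x1 is Command Specific Status; for the Fabrics Connect command, SC 0x82 = 130 is Connect Invalid Parameters), this is the target rejecting the I/O qpair because it names a controller it does not know, consistent with the target-side "Unknown controller ID 0x1" entry. An illustrative decoder, not SPDK code, with the table values taken from the spec as best I can tell:

```python
# Status Code Types from the NVMe base specification.
STATUS_CODE_TYPES = {
    0x0: "Generic Command Status",
    0x1: "Command Specific Status",
    0x2: "Media and Data Integrity Errors",
    0x3: "Path Related Status",
}

# Command-specific status codes defined for the Fabrics Connect command
# in the NVMe-oF specification.
CONNECT_STATUS = {
    0x80: "Incompatible Format",
    0x81: "Controller Busy",
    0x82: "Connect Invalid Parameters",
    0x83: "Connect Restart Discovery",
    0x84: "Connect Invalid Host",
}

def decode_connect_status(sct: int, sc: int) -> str:
    """Render an (sct, sc) pair from a Fabrics CONNECT completion."""
    sct_name = STATUS_CODE_TYPES.get(sct, "Reserved/Vendor Specific")
    if sct == 0x1 and sc in CONNECT_STATUS:
        return f"{sct_name}: {CONNECT_STATUS[sc]}"
    return f"{sct_name}: sc=0x{sc:02x}"

# The pair reported throughout the rest of this log:
print(decode_connect_status(1, 130))
```

So the repeated blocks below are the host polling CONNECT, getting Invalid Parameters back each time, and tearing the qpair down again.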
00:34:15.152 01:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:15.152 01:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:15.152 01:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:15.152 01:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:15.152 01:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:15.152 01:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1984005
00:34:15.152 [2024-07-26 01:16:45.296351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.152 [2024-07-26 01:16:45.296464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.152 [2024-07-26 01:16:45.296493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.152 [2024-07-26 01:16:45.296509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.152 [2024-07-26 01:16:45.296522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.152 [2024-07-26 01:16:45.296566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.152 qpair failed and we were unable to recover it.
00:34:15.152 [2024-07-26 01:16:45.306393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.152 [2024-07-26 01:16:45.306509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.152 [2024-07-26 01:16:45.306537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.152 [2024-07-26 01:16:45.306553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.152 [2024-07-26 01:16:45.306567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.152 [2024-07-26 01:16:45.306598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.152 qpair failed and we were unable to recover it.
[the seven-entry block above repeats verbatim roughly every 10 ms, with successive timestamps, through 01:16:45.436955]
00:34:15.153 [2024-07-26 01:16:45.446714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.153 [2024-07-26 01:16:45.446826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.153 [2024-07-26 01:16:45.446852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.153 [2024-07-26 01:16:45.446867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.153 [2024-07-26 01:16:45.446880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:15.153 [2024-07-26 01:16:45.446911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.153 qpair failed and we were unable to recover it. 
00:34:15.153 [2024-07-26 01:16:45.456696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.153 [2024-07-26 01:16:45.456802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.153 [2024-07-26 01:16:45.456828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.153 [2024-07-26 01:16:45.456845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.153 [2024-07-26 01:16:45.456858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:15.153 [2024-07-26 01:16:45.456894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.153 qpair failed and we were unable to recover it. 
00:34:15.153 [2024-07-26 01:16:45.466719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.153 [2024-07-26 01:16:45.466825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.153 [2024-07-26 01:16:45.466850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.153 [2024-07-26 01:16:45.466865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.153 [2024-07-26 01:16:45.466878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:15.153 [2024-07-26 01:16:45.466908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.153 qpair failed and we were unable to recover it. 
00:34:15.153 [2024-07-26 01:16:45.476742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.153 [2024-07-26 01:16:45.476857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.153 [2024-07-26 01:16:45.476883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.153 [2024-07-26 01:16:45.476898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.153 [2024-07-26 01:16:45.476911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:15.153 [2024-07-26 01:16:45.476941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.154 qpair failed and we were unable to recover it. 
00:34:15.154 [2024-07-26 01:16:45.486795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.154 [2024-07-26 01:16:45.486912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.154 [2024-07-26 01:16:45.486937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.154 [2024-07-26 01:16:45.486953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.154 [2024-07-26 01:16:45.486966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:15.154 [2024-07-26 01:16:45.486997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.154 qpair failed and we were unable to recover it. 
00:34:15.154 [2024-07-26 01:16:45.496821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.154 [2024-07-26 01:16:45.496928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.154 [2024-07-26 01:16:45.496953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.154 [2024-07-26 01:16:45.496968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.154 [2024-07-26 01:16:45.496982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:15.154 [2024-07-26 01:16:45.497012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.154 qpair failed and we were unable to recover it. 
00:34:15.154 [2024-07-26 01:16:45.506858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.154 [2024-07-26 01:16:45.506971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.154 [2024-07-26 01:16:45.507003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.154 [2024-07-26 01:16:45.507023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.154 [2024-07-26 01:16:45.507038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:15.154 [2024-07-26 01:16:45.507076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.154 qpair failed and we were unable to recover it. 
00:34:15.154 [2024-07-26 01:16:45.516869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.154 [2024-07-26 01:16:45.516988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.154 [2024-07-26 01:16:45.517014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.154 [2024-07-26 01:16:45.517030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.154 [2024-07-26 01:16:45.517044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:15.154 [2024-07-26 01:16:45.517082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.154 qpair failed and we were unable to recover it. 
00:34:15.154 [2024-07-26 01:16:45.526909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.154 [2024-07-26 01:16:45.527023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.154 [2024-07-26 01:16:45.527049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.154 [2024-07-26 01:16:45.527072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.154 [2024-07-26 01:16:45.527087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:15.154 [2024-07-26 01:16:45.527118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.154 qpair failed and we were unable to recover it. 
00:34:15.154 [2024-07-26 01:16:45.536947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.154 [2024-07-26 01:16:45.537075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.154 [2024-07-26 01:16:45.537102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.154 [2024-07-26 01:16:45.537117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.154 [2024-07-26 01:16:45.537131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:15.154 [2024-07-26 01:16:45.537162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.154 qpair failed and we were unable to recover it. 
00:34:15.154 [2024-07-26 01:16:45.546957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.154 [2024-07-26 01:16:45.547141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.154 [2024-07-26 01:16:45.547181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.154 [2024-07-26 01:16:45.547197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.154 [2024-07-26 01:16:45.547216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:15.154 [2024-07-26 01:16:45.547247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.154 qpair failed and we were unable to recover it. 
00:34:15.154 [2024-07-26 01:16:45.556966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.154 [2024-07-26 01:16:45.557087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.154 [2024-07-26 01:16:45.557112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.154 [2024-07-26 01:16:45.557127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.154 [2024-07-26 01:16:45.557140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:15.154 [2024-07-26 01:16:45.557171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.154 qpair failed and we were unable to recover it. 
00:34:15.413 [2024-07-26 01:16:45.567036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.413 [2024-07-26 01:16:45.567161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.413 [2024-07-26 01:16:45.567191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.413 [2024-07-26 01:16:45.567209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.413 [2024-07-26 01:16:45.567225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:15.413 [2024-07-26 01:16:45.567256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.413 qpair failed and we were unable to recover it. 
00:34:15.413 [2024-07-26 01:16:45.577046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.413 [2024-07-26 01:16:45.577163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.413 [2024-07-26 01:16:45.577189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.413 [2024-07-26 01:16:45.577205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.413 [2024-07-26 01:16:45.577218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:15.413 [2024-07-26 01:16:45.577250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.413 qpair failed and we were unable to recover it. 
00:34:15.413 [2024-07-26 01:16:45.587128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.413 [2024-07-26 01:16:45.587295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.413 [2024-07-26 01:16:45.587323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.413 [2024-07-26 01:16:45.587339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.413 [2024-07-26 01:16:45.587353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:15.413 [2024-07-26 01:16:45.587398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.413 qpair failed and we were unable to recover it. 
00:34:15.413 [2024-07-26 01:16:45.597123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.413 [2024-07-26 01:16:45.597248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.413 [2024-07-26 01:16:45.597274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.413 [2024-07-26 01:16:45.597289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.413 [2024-07-26 01:16:45.597303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:15.413 [2024-07-26 01:16:45.597334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.413 qpair failed and we were unable to recover it. 
00:34:15.413 [2024-07-26 01:16:45.607135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.413 [2024-07-26 01:16:45.607248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.413 [2024-07-26 01:16:45.607274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.413 [2024-07-26 01:16:45.607290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.413 [2024-07-26 01:16:45.607304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:15.413 [2024-07-26 01:16:45.607335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.413 qpair failed and we were unable to recover it. 
00:34:15.413 [2024-07-26 01:16:45.617149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.413 [2024-07-26 01:16:45.617256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.413 [2024-07-26 01:16:45.617283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.414 [2024-07-26 01:16:45.617298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.414 [2024-07-26 01:16:45.617312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:15.414 [2024-07-26 01:16:45.617343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.414 qpair failed and we were unable to recover it. 
00:34:15.414 [2024-07-26 01:16:45.627165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.414 [2024-07-26 01:16:45.627276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.414 [2024-07-26 01:16:45.627302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.414 [2024-07-26 01:16:45.627317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.414 [2024-07-26 01:16:45.627331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:15.414 [2024-07-26 01:16:45.627360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.414 qpair failed and we were unable to recover it. 
00:34:15.414 [2024-07-26 01:16:45.637242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.414 [2024-07-26 01:16:45.637355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.414 [2024-07-26 01:16:45.637381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.414 [2024-07-26 01:16:45.637396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.414 [2024-07-26 01:16:45.637415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:15.414 [2024-07-26 01:16:45.637446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.414 qpair failed and we were unable to recover it. 
00:34:15.414 [2024-07-26 01:16:45.647272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.414 [2024-07-26 01:16:45.647384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.414 [2024-07-26 01:16:45.647409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.414 [2024-07-26 01:16:45.647424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.414 [2024-07-26 01:16:45.647438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:15.414 [2024-07-26 01:16:45.647469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.414 qpair failed and we were unable to recover it. 
00:34:15.414 [2024-07-26 01:16:45.657252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.414 [2024-07-26 01:16:45.657362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.414 [2024-07-26 01:16:45.657388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.414 [2024-07-26 01:16:45.657403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.414 [2024-07-26 01:16:45.657417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:15.414 [2024-07-26 01:16:45.657447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.414 qpair failed and we were unable to recover it. 
00:34:15.414 [2024-07-26 01:16:45.667273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.414 [2024-07-26 01:16:45.667392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.414 [2024-07-26 01:16:45.667420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.414 [2024-07-26 01:16:45.667436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.414 [2024-07-26 01:16:45.667449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:15.414 [2024-07-26 01:16:45.667479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.414 qpair failed and we were unable to recover it. 
00:34:15.414 [2024-07-26 01:16:45.677336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.414 [2024-07-26 01:16:45.677456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.414 [2024-07-26 01:16:45.677481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.414 [2024-07-26 01:16:45.677497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.414 [2024-07-26 01:16:45.677511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:15.414 [2024-07-26 01:16:45.677541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.414 qpair failed and we were unable to recover it. 
00:34:15.414 [2024-07-26 01:16:45.687331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.414 [2024-07-26 01:16:45.687441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.414 [2024-07-26 01:16:45.687467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.414 [2024-07-26 01:16:45.687482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.414 [2024-07-26 01:16:45.687496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:15.414 [2024-07-26 01:16:45.687527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.414 qpair failed and we were unable to recover it. 
00:34:15.414 [2024-07-26 01:16:45.697379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.414 [2024-07-26 01:16:45.697504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.414 [2024-07-26 01:16:45.697532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.414 [2024-07-26 01:16:45.697547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.414 [2024-07-26 01:16:45.697561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:15.414 [2024-07-26 01:16:45.697591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.414 qpair failed and we were unable to recover it. 
00:34:15.414 [2024-07-26 01:16:45.707411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.414 [2024-07-26 01:16:45.707520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.414 [2024-07-26 01:16:45.707545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.414 [2024-07-26 01:16:45.707560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.414 [2024-07-26 01:16:45.707574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:15.414 [2024-07-26 01:16:45.707604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.414 qpair failed and we were unable to recover it. 
00:34:15.414 [2024-07-26 01:16:45.717422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.414 [2024-07-26 01:16:45.717535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.414 [2024-07-26 01:16:45.717560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.414 [2024-07-26 01:16:45.717575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.414 [2024-07-26 01:16:45.717588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:15.414 [2024-07-26 01:16:45.717619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.414 qpair failed and we were unable to recover it. 
00:34:15.414 [2024-07-26 01:16:45.727501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.414 [2024-07-26 01:16:45.727612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.414 [2024-07-26 01:16:45.727638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.414 [2024-07-26 01:16:45.727658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.414 [2024-07-26 01:16:45.727673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.414 [2024-07-26 01:16:45.727704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.414 qpair failed and we were unable to recover it.
00:34:15.414 [2024-07-26 01:16:45.737497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.414 [2024-07-26 01:16:45.737612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.414 [2024-07-26 01:16:45.737638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.414 [2024-07-26 01:16:45.737653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.414 [2024-07-26 01:16:45.737667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.415 [2024-07-26 01:16:45.737697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.415 qpair failed and we were unable to recover it.
00:34:15.415 [2024-07-26 01:16:45.747515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.415 [2024-07-26 01:16:45.747631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.415 [2024-07-26 01:16:45.747657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.415 [2024-07-26 01:16:45.747672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.415 [2024-07-26 01:16:45.747686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.415 [2024-07-26 01:16:45.747717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.415 qpair failed and we were unable to recover it.
00:34:15.415 [2024-07-26 01:16:45.757565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.415 [2024-07-26 01:16:45.757684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.415 [2024-07-26 01:16:45.757710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.415 [2024-07-26 01:16:45.757727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.415 [2024-07-26 01:16:45.757740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.415 [2024-07-26 01:16:45.757771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.415 qpair failed and we were unable to recover it.
00:34:15.415 [2024-07-26 01:16:45.767625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.415 [2024-07-26 01:16:45.767783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.415 [2024-07-26 01:16:45.767811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.415 [2024-07-26 01:16:45.767827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.415 [2024-07-26 01:16:45.767840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.415 [2024-07-26 01:16:45.767870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.415 qpair failed and we were unable to recover it.
00:34:15.415 [2024-07-26 01:16:45.777620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.415 [2024-07-26 01:16:45.777751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.415 [2024-07-26 01:16:45.777779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.415 [2024-07-26 01:16:45.777795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.415 [2024-07-26 01:16:45.777808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.415 [2024-07-26 01:16:45.777838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.415 qpair failed and we were unable to recover it.
00:34:15.415 [2024-07-26 01:16:45.787640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.415 [2024-07-26 01:16:45.787792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.415 [2024-07-26 01:16:45.787819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.415 [2024-07-26 01:16:45.787836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.415 [2024-07-26 01:16:45.787849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.415 [2024-07-26 01:16:45.787893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.415 qpair failed and we were unable to recover it.
00:34:15.415 [2024-07-26 01:16:45.797654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.415 [2024-07-26 01:16:45.797767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.415 [2024-07-26 01:16:45.797792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.415 [2024-07-26 01:16:45.797807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.415 [2024-07-26 01:16:45.797820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.415 [2024-07-26 01:16:45.797851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.415 qpair failed and we were unable to recover it.
00:34:15.415 [2024-07-26 01:16:45.807680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.415 [2024-07-26 01:16:45.807788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.415 [2024-07-26 01:16:45.807813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.415 [2024-07-26 01:16:45.807829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.415 [2024-07-26 01:16:45.807842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.415 [2024-07-26 01:16:45.807872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.415 qpair failed and we were unable to recover it.
00:34:15.415 [2024-07-26 01:16:45.817720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.415 [2024-07-26 01:16:45.817827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.415 [2024-07-26 01:16:45.817858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.415 [2024-07-26 01:16:45.817874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.415 [2024-07-26 01:16:45.817887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.415 [2024-07-26 01:16:45.817918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.415 qpair failed and we were unable to recover it.
00:34:15.415 [2024-07-26 01:16:45.827734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.415 [2024-07-26 01:16:45.827846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.415 [2024-07-26 01:16:45.827872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.415 [2024-07-26 01:16:45.827887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.415 [2024-07-26 01:16:45.827900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.415 [2024-07-26 01:16:45.827931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.415 qpair failed and we were unable to recover it.
00:34:15.415 [2024-07-26 01:16:45.837795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.415 [2024-07-26 01:16:45.837909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.415 [2024-07-26 01:16:45.837934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.415 [2024-07-26 01:16:45.837950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.415 [2024-07-26 01:16:45.837963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.415 [2024-07-26 01:16:45.838006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.415 qpair failed and we were unable to recover it.
00:34:15.674 [2024-07-26 01:16:45.847777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.674 [2024-07-26 01:16:45.847887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.674 [2024-07-26 01:16:45.847913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.674 [2024-07-26 01:16:45.847928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.674 [2024-07-26 01:16:45.847942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.674 [2024-07-26 01:16:45.847971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.674 qpair failed and we were unable to recover it.
00:34:15.674 [2024-07-26 01:16:45.857847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.674 [2024-07-26 01:16:45.857965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.674 [2024-07-26 01:16:45.857991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.674 [2024-07-26 01:16:45.858007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.674 [2024-07-26 01:16:45.858020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.674 [2024-07-26 01:16:45.858055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.674 qpair failed and we were unable to recover it.
00:34:15.674 [2024-07-26 01:16:45.867872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.674 [2024-07-26 01:16:45.867978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.674 [2024-07-26 01:16:45.868003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.674 [2024-07-26 01:16:45.868018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.674 [2024-07-26 01:16:45.868031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.674 [2024-07-26 01:16:45.868068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.674 qpair failed and we were unable to recover it.
00:34:15.674 [2024-07-26 01:16:45.877895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.674 [2024-07-26 01:16:45.878035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.674 [2024-07-26 01:16:45.878069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.674 [2024-07-26 01:16:45.878086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.674 [2024-07-26 01:16:45.878100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.674 [2024-07-26 01:16:45.878144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.674 qpair failed and we were unable to recover it.
00:34:15.674 [2024-07-26 01:16:45.887927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.674 [2024-07-26 01:16:45.888080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.674 [2024-07-26 01:16:45.888110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.674 [2024-07-26 01:16:45.888138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.674 [2024-07-26 01:16:45.888152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.674 [2024-07-26 01:16:45.888183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.674 qpair failed and we were unable to recover it.
00:34:15.674 [2024-07-26 01:16:45.897945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.674 [2024-07-26 01:16:45.898057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.674 [2024-07-26 01:16:45.898092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.674 [2024-07-26 01:16:45.898108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.674 [2024-07-26 01:16:45.898121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.674 [2024-07-26 01:16:45.898152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.674 qpair failed and we were unable to recover it.
00:34:15.674 [2024-07-26 01:16:45.907976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.674 [2024-07-26 01:16:45.908088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.674 [2024-07-26 01:16:45.908118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.674 [2024-07-26 01:16:45.908134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.674 [2024-07-26 01:16:45.908148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.674 [2024-07-26 01:16:45.908191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.675 qpair failed and we were unable to recover it.
00:34:15.675 [2024-07-26 01:16:45.917993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.675 [2024-07-26 01:16:45.918120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.675 [2024-07-26 01:16:45.918148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.675 [2024-07-26 01:16:45.918164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.675 [2024-07-26 01:16:45.918177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.675 [2024-07-26 01:16:45.918208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.675 qpair failed and we were unable to recover it.
00:34:15.675 [2024-07-26 01:16:45.928029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.675 [2024-07-26 01:16:45.928158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.675 [2024-07-26 01:16:45.928186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.675 [2024-07-26 01:16:45.928201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.675 [2024-07-26 01:16:45.928215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.675 [2024-07-26 01:16:45.928247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.675 qpair failed and we were unable to recover it.
00:34:15.675 [2024-07-26 01:16:45.938066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.675 [2024-07-26 01:16:45.938172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.675 [2024-07-26 01:16:45.938200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.675 [2024-07-26 01:16:45.938215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.675 [2024-07-26 01:16:45.938228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.675 [2024-07-26 01:16:45.938258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.675 qpair failed and we were unable to recover it.
00:34:15.675 [2024-07-26 01:16:45.948127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.675 [2024-07-26 01:16:45.948259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.675 [2024-07-26 01:16:45.948286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.675 [2024-07-26 01:16:45.948302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.675 [2024-07-26 01:16:45.948315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.675 [2024-07-26 01:16:45.948369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.675 qpair failed and we were unable to recover it.
00:34:15.675 [2024-07-26 01:16:45.958126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.675 [2024-07-26 01:16:45.958240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.675 [2024-07-26 01:16:45.958268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.675 [2024-07-26 01:16:45.958283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.675 [2024-07-26 01:16:45.958297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.675 [2024-07-26 01:16:45.958326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.675 qpair failed and we were unable to recover it.
00:34:15.675 [2024-07-26 01:16:45.968148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.675 [2024-07-26 01:16:45.968259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.675 [2024-07-26 01:16:45.968286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.675 [2024-07-26 01:16:45.968301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.675 [2024-07-26 01:16:45.968314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.675 [2024-07-26 01:16:45.968344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.675 qpair failed and we were unable to recover it.
00:34:15.675 [2024-07-26 01:16:45.978180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.675 [2024-07-26 01:16:45.978341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.675 [2024-07-26 01:16:45.978367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.675 [2024-07-26 01:16:45.978382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.675 [2024-07-26 01:16:45.978411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.675 [2024-07-26 01:16:45.978440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.675 qpair failed and we were unable to recover it.
00:34:15.675 [2024-07-26 01:16:45.988257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.675 [2024-07-26 01:16:45.988367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.675 [2024-07-26 01:16:45.988394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.675 [2024-07-26 01:16:45.988409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.675 [2024-07-26 01:16:45.988423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.675 [2024-07-26 01:16:45.988453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.675 qpair failed and we were unable to recover it.
00:34:15.675 [2024-07-26 01:16:45.998266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.675 [2024-07-26 01:16:45.998399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.675 [2024-07-26 01:16:45.998427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.675 [2024-07-26 01:16:45.998443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.675 [2024-07-26 01:16:45.998456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.675 [2024-07-26 01:16:45.998502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.675 qpair failed and we were unable to recover it.
00:34:15.675 [2024-07-26 01:16:46.008266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.675 [2024-07-26 01:16:46.008377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.675 [2024-07-26 01:16:46.008405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.675 [2024-07-26 01:16:46.008421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.675 [2024-07-26 01:16:46.008437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.675 [2024-07-26 01:16:46.008468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.675 qpair failed and we were unable to recover it.
00:34:15.675 [2024-07-26 01:16:46.018308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.675 [2024-07-26 01:16:46.018431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.675 [2024-07-26 01:16:46.018458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.675 [2024-07-26 01:16:46.018474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.675 [2024-07-26 01:16:46.018488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.675 [2024-07-26 01:16:46.018519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.675 qpair failed and we were unable to recover it.
00:34:15.675 [2024-07-26 01:16:46.028366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.675 [2024-07-26 01:16:46.028487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.675 [2024-07-26 01:16:46.028513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.675 [2024-07-26 01:16:46.028528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.675 [2024-07-26 01:16:46.028543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.675 [2024-07-26 01:16:46.028572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.675 qpair failed and we were unable to recover it.
00:34:15.676 [2024-07-26 01:16:46.038342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.676 [2024-07-26 01:16:46.038450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.676 [2024-07-26 01:16:46.038477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.676 [2024-07-26 01:16:46.038491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.676 [2024-07-26 01:16:46.038511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.676 [2024-07-26 01:16:46.038541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.676 qpair failed and we were unable to recover it.
00:34:15.676 [2024-07-26 01:16:46.048386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.676 [2024-07-26 01:16:46.048508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.676 [2024-07-26 01:16:46.048535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.676 [2024-07-26 01:16:46.048550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.676 [2024-07-26 01:16:46.048563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.676 [2024-07-26 01:16:46.048593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.676 qpair failed and we were unable to recover it.
00:34:15.676 [2024-07-26 01:16:46.058426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.676 [2024-07-26 01:16:46.058545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.676 [2024-07-26 01:16:46.058572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.676 [2024-07-26 01:16:46.058587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.676 [2024-07-26 01:16:46.058600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.676 [2024-07-26 01:16:46.058630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.676 qpair failed and we were unable to recover it.
00:34:15.676 [2024-07-26 01:16:46.068416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.676 [2024-07-26 01:16:46.068520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.676 [2024-07-26 01:16:46.068548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.676 [2024-07-26 01:16:46.068564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.676 [2024-07-26 01:16:46.068578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.676 [2024-07-26 01:16:46.068608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.676 qpair failed and we were unable to recover it.
00:34:15.676 [2024-07-26 01:16:46.078493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.676 [2024-07-26 01:16:46.078611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.676 [2024-07-26 01:16:46.078642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.676 [2024-07-26 01:16:46.078659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.676 [2024-07-26 01:16:46.078673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.676 [2024-07-26 01:16:46.078704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.676 qpair failed and we were unable to recover it.
00:34:15.676 [2024-07-26 01:16:46.088625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.676 [2024-07-26 01:16:46.088744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.676 [2024-07-26 01:16:46.088772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.676 [2024-07-26 01:16:46.088787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.676 [2024-07-26 01:16:46.088802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:15.676 [2024-07-26 01:16:46.088833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.676 qpair failed and we were unable to recover it. 
00:34:15.676 [2024-07-26 01:16:46.098573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.676 [2024-07-26 01:16:46.098681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.676 [2024-07-26 01:16:46.098709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.676 [2024-07-26 01:16:46.098724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.676 [2024-07-26 01:16:46.098738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:15.676 [2024-07-26 01:16:46.098769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:15.676 qpair failed and we were unable to recover it. 
00:34:15.935 [2024-07-26 01:16:46.108615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.935 [2024-07-26 01:16:46.108728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.935 [2024-07-26 01:16:46.108755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.935 [2024-07-26 01:16:46.108771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.935 [2024-07-26 01:16:46.108784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.935 [2024-07-26 01:16:46.108815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.935 qpair failed and we were unable to recover it.
00:34:15.935 [2024-07-26 01:16:46.118673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.935 [2024-07-26 01:16:46.118791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.935 [2024-07-26 01:16:46.118818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.935 [2024-07-26 01:16:46.118833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.935 [2024-07-26 01:16:46.118846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.935 [2024-07-26 01:16:46.118877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.935 qpair failed and we were unable to recover it.
00:34:15.935 [2024-07-26 01:16:46.128638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.935 [2024-07-26 01:16:46.128800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.935 [2024-07-26 01:16:46.128829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.935 [2024-07-26 01:16:46.128850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.935 [2024-07-26 01:16:46.128879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.935 [2024-07-26 01:16:46.128909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.935 qpair failed and we were unable to recover it.
00:34:15.935 [2024-07-26 01:16:46.138716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.935 [2024-07-26 01:16:46.138865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.935 [2024-07-26 01:16:46.138892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.935 [2024-07-26 01:16:46.138908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.935 [2024-07-26 01:16:46.138921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.935 [2024-07-26 01:16:46.138979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.935 qpair failed and we were unable to recover it.
00:34:15.935 [2024-07-26 01:16:46.148649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.935 [2024-07-26 01:16:46.148753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.935 [2024-07-26 01:16:46.148780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.935 [2024-07-26 01:16:46.148795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.935 [2024-07-26 01:16:46.148810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.935 [2024-07-26 01:16:46.148840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.935 qpair failed and we were unable to recover it.
00:34:15.935 [2024-07-26 01:16:46.158700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.936 [2024-07-26 01:16:46.158820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.936 [2024-07-26 01:16:46.158847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.936 [2024-07-26 01:16:46.158862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.936 [2024-07-26 01:16:46.158877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.936 [2024-07-26 01:16:46.158907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.936 qpair failed and we were unable to recover it.
00:34:15.936 [2024-07-26 01:16:46.168837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.936 [2024-07-26 01:16:46.168986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.936 [2024-07-26 01:16:46.169027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.936 [2024-07-26 01:16:46.169041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.936 [2024-07-26 01:16:46.169054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.936 [2024-07-26 01:16:46.169109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.936 qpair failed and we were unable to recover it.
00:34:15.936 [2024-07-26 01:16:46.178761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.936 [2024-07-26 01:16:46.178885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.936 [2024-07-26 01:16:46.178911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.936 [2024-07-26 01:16:46.178927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.936 [2024-07-26 01:16:46.178942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.936 [2024-07-26 01:16:46.178972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.936 qpair failed and we were unable to recover it.
00:34:15.936 [2024-07-26 01:16:46.188766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.936 [2024-07-26 01:16:46.188889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.936 [2024-07-26 01:16:46.188916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.936 [2024-07-26 01:16:46.188931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.936 [2024-07-26 01:16:46.188945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.936 [2024-07-26 01:16:46.188975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.936 qpair failed and we were unable to recover it.
00:34:15.936 [2024-07-26 01:16:46.198855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.936 [2024-07-26 01:16:46.199005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.936 [2024-07-26 01:16:46.199032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.936 [2024-07-26 01:16:46.199047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.936 [2024-07-26 01:16:46.199083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.936 [2024-07-26 01:16:46.199143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.936 qpair failed and we were unable to recover it.
00:34:15.936 [2024-07-26 01:16:46.208826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.936 [2024-07-26 01:16:46.208934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.936 [2024-07-26 01:16:46.208961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.936 [2024-07-26 01:16:46.208976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.936 [2024-07-26 01:16:46.208991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.936 [2024-07-26 01:16:46.209021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.936 qpair failed and we were unable to recover it.
00:34:15.936 [2024-07-26 01:16:46.218894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.936 [2024-07-26 01:16:46.219046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.936 [2024-07-26 01:16:46.219095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.936 [2024-07-26 01:16:46.219113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.936 [2024-07-26 01:16:46.219127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.936 [2024-07-26 01:16:46.219171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.936 qpair failed and we were unable to recover it.
00:34:15.936 [2024-07-26 01:16:46.228918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.936 [2024-07-26 01:16:46.229021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.936 [2024-07-26 01:16:46.229048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.936 [2024-07-26 01:16:46.229070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.936 [2024-07-26 01:16:46.229084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.936 [2024-07-26 01:16:46.229116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.936 qpair failed and we were unable to recover it.
00:34:15.936 [2024-07-26 01:16:46.238942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.936 [2024-07-26 01:16:46.239054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.936 [2024-07-26 01:16:46.239087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.936 [2024-07-26 01:16:46.239103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.936 [2024-07-26 01:16:46.239116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.936 [2024-07-26 01:16:46.239147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.936 qpair failed and we were unable to recover it.
00:34:15.936 [2024-07-26 01:16:46.248949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.936 [2024-07-26 01:16:46.249055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.936 [2024-07-26 01:16:46.249089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.936 [2024-07-26 01:16:46.249105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.936 [2024-07-26 01:16:46.249120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.936 [2024-07-26 01:16:46.249163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.936 qpair failed and we were unable to recover it.
00:34:15.936 [2024-07-26 01:16:46.258985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.936 [2024-07-26 01:16:46.259113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.936 [2024-07-26 01:16:46.259141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.936 [2024-07-26 01:16:46.259156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.936 [2024-07-26 01:16:46.259169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.936 [2024-07-26 01:16:46.259205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.936 qpair failed and we were unable to recover it.
00:34:15.936 [2024-07-26 01:16:46.269033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.936 [2024-07-26 01:16:46.269145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.936 [2024-07-26 01:16:46.269172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.936 [2024-07-26 01:16:46.269188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.936 [2024-07-26 01:16:46.269202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.936 [2024-07-26 01:16:46.269232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.936 qpair failed and we were unable to recover it.
00:34:15.936 [2024-07-26 01:16:46.279041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.936 [2024-07-26 01:16:46.279162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.936 [2024-07-26 01:16:46.279190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.936 [2024-07-26 01:16:46.279206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.936 [2024-07-26 01:16:46.279220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.936 [2024-07-26 01:16:46.279264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.936 qpair failed and we were unable to recover it.
00:34:15.936 [2024-07-26 01:16:46.289086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.936 [2024-07-26 01:16:46.289254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.936 [2024-07-26 01:16:46.289282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.937 [2024-07-26 01:16:46.289298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.937 [2024-07-26 01:16:46.289316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.937 [2024-07-26 01:16:46.289348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.937 qpair failed and we were unable to recover it.
00:34:15.937 [2024-07-26 01:16:46.299110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.937 [2024-07-26 01:16:46.299269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.937 [2024-07-26 01:16:46.299296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.937 [2024-07-26 01:16:46.299314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.937 [2024-07-26 01:16:46.299330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.937 [2024-07-26 01:16:46.299390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.937 qpair failed and we were unable to recover it.
00:34:15.937 [2024-07-26 01:16:46.309157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.937 [2024-07-26 01:16:46.309301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.937 [2024-07-26 01:16:46.309333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.937 [2024-07-26 01:16:46.309349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.937 [2024-07-26 01:16:46.309378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.937 [2024-07-26 01:16:46.309407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.937 qpair failed and we were unable to recover it.
00:34:15.937 [2024-07-26 01:16:46.319174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.937 [2024-07-26 01:16:46.319319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.937 [2024-07-26 01:16:46.319346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.937 [2024-07-26 01:16:46.319362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.937 [2024-07-26 01:16:46.319375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.937 [2024-07-26 01:16:46.319406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.937 qpair failed and we were unable to recover it.
00:34:15.937 [2024-07-26 01:16:46.329180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.937 [2024-07-26 01:16:46.329325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.937 [2024-07-26 01:16:46.329352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.937 [2024-07-26 01:16:46.329368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.937 [2024-07-26 01:16:46.329382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.937 [2024-07-26 01:16:46.329428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.937 qpair failed and we were unable to recover it.
00:34:15.937 [2024-07-26 01:16:46.339214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.937 [2024-07-26 01:16:46.339345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.937 [2024-07-26 01:16:46.339372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.937 [2024-07-26 01:16:46.339388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.937 [2024-07-26 01:16:46.339406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.937 [2024-07-26 01:16:46.339450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.937 qpair failed and we were unable to recover it.
00:34:15.937 [2024-07-26 01:16:46.349267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.937 [2024-07-26 01:16:46.349378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.937 [2024-07-26 01:16:46.349405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.937 [2024-07-26 01:16:46.349420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.937 [2024-07-26 01:16:46.349435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.937 [2024-07-26 01:16:46.349471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.937 qpair failed and we were unable to recover it.
00:34:15.937 [2024-07-26 01:16:46.359296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:15.937 [2024-07-26 01:16:46.359423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:15.937 [2024-07-26 01:16:46.359450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:15.937 [2024-07-26 01:16:46.359465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:15.937 [2024-07-26 01:16:46.359480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:15.937 [2024-07-26 01:16:46.359510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.937 qpair failed and we were unable to recover it.
00:34:16.196 [2024-07-26 01:16:46.369307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.196 [2024-07-26 01:16:46.369422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.196 [2024-07-26 01:16:46.369449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.196 [2024-07-26 01:16:46.369465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.196 [2024-07-26 01:16:46.369479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.196 [2024-07-26 01:16:46.369509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.196 qpair failed and we were unable to recover it.
00:34:16.196 [2024-07-26 01:16:46.379314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.196 [2024-07-26 01:16:46.379425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.196 [2024-07-26 01:16:46.379452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.196 [2024-07-26 01:16:46.379468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.196 [2024-07-26 01:16:46.379483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.196 [2024-07-26 01:16:46.379513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.196 qpair failed and we were unable to recover it.
00:34:16.196 [2024-07-26 01:16:46.389364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.196 [2024-07-26 01:16:46.389489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.196 [2024-07-26 01:16:46.389516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.196 [2024-07-26 01:16:46.389532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.196 [2024-07-26 01:16:46.389546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.196 [2024-07-26 01:16:46.389576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.196 qpair failed and we were unable to recover it.
00:34:16.196 [2024-07-26 01:16:46.399375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.196 [2024-07-26 01:16:46.399490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.196 [2024-07-26 01:16:46.399521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.196 [2024-07-26 01:16:46.399537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.196 [2024-07-26 01:16:46.399551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.196 [2024-07-26 01:16:46.399583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.196 qpair failed and we were unable to recover it.
00:34:16.196 [2024-07-26 01:16:46.409378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.196 [2024-07-26 01:16:46.409482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.196 [2024-07-26 01:16:46.409508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.196 [2024-07-26 01:16:46.409524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.196 [2024-07-26 01:16:46.409539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.196 [2024-07-26 01:16:46.409569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.196 qpair failed and we were unable to recover it. 
00:34:16.196 [2024-07-26 01:16:46.419477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.196 [2024-07-26 01:16:46.419588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.196 [2024-07-26 01:16:46.419615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.196 [2024-07-26 01:16:46.419630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.196 [2024-07-26 01:16:46.419643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.196 [2024-07-26 01:16:46.419674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.196 qpair failed and we were unable to recover it. 
00:34:16.196 [2024-07-26 01:16:46.429532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.196 [2024-07-26 01:16:46.429697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.196 [2024-07-26 01:16:46.429738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.196 [2024-07-26 01:16:46.429754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.196 [2024-07-26 01:16:46.429767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.196 [2024-07-26 01:16:46.429806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.196 qpair failed and we were unable to recover it. 
00:34:16.196 [2024-07-26 01:16:46.439516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.196 [2024-07-26 01:16:46.439637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.196 [2024-07-26 01:16:46.439664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.196 [2024-07-26 01:16:46.439679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.196 [2024-07-26 01:16:46.439699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.196 [2024-07-26 01:16:46.439744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.196 qpair failed and we were unable to recover it. 
00:34:16.196 [2024-07-26 01:16:46.449556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.196 [2024-07-26 01:16:46.449677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.196 [2024-07-26 01:16:46.449704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.196 [2024-07-26 01:16:46.449720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.196 [2024-07-26 01:16:46.449734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.196 [2024-07-26 01:16:46.449765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.196 qpair failed and we were unable to recover it. 
00:34:16.196 [2024-07-26 01:16:46.459578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.196 [2024-07-26 01:16:46.459739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.196 [2024-07-26 01:16:46.459766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.196 [2024-07-26 01:16:46.459782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.196 [2024-07-26 01:16:46.459796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.196 [2024-07-26 01:16:46.459842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.196 qpair failed and we were unable to recover it. 
00:34:16.196 [2024-07-26 01:16:46.469542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.197 [2024-07-26 01:16:46.469660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.197 [2024-07-26 01:16:46.469687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.197 [2024-07-26 01:16:46.469701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.197 [2024-07-26 01:16:46.469716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.197 [2024-07-26 01:16:46.469746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.197 qpair failed and we were unable to recover it. 
00:34:16.197 [2024-07-26 01:16:46.479594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.197 [2024-07-26 01:16:46.479758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.197 [2024-07-26 01:16:46.479784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.197 [2024-07-26 01:16:46.479799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.197 [2024-07-26 01:16:46.479813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.197 [2024-07-26 01:16:46.479858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.197 qpair failed and we were unable to recover it. 
00:34:16.197 [2024-07-26 01:16:46.489636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.197 [2024-07-26 01:16:46.489758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.197 [2024-07-26 01:16:46.489784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.197 [2024-07-26 01:16:46.489800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.197 [2024-07-26 01:16:46.489813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.197 [2024-07-26 01:16:46.489843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.197 qpair failed and we were unable to recover it. 
00:34:16.197 [2024-07-26 01:16:46.499664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.197 [2024-07-26 01:16:46.499779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.197 [2024-07-26 01:16:46.499806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.197 [2024-07-26 01:16:46.499821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.197 [2024-07-26 01:16:46.499835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.197 [2024-07-26 01:16:46.499865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.197 qpair failed and we were unable to recover it. 
00:34:16.197 [2024-07-26 01:16:46.509722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.197 [2024-07-26 01:16:46.509875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.197 [2024-07-26 01:16:46.509903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.197 [2024-07-26 01:16:46.509919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.197 [2024-07-26 01:16:46.509949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.197 [2024-07-26 01:16:46.509978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.197 qpair failed and we were unable to recover it. 
00:34:16.197 [2024-07-26 01:16:46.519705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.197 [2024-07-26 01:16:46.519823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.197 [2024-07-26 01:16:46.519851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.197 [2024-07-26 01:16:46.519867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.197 [2024-07-26 01:16:46.519881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.197 [2024-07-26 01:16:46.519911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.197 qpair failed and we were unable to recover it. 
00:34:16.197 [2024-07-26 01:16:46.529746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.197 [2024-07-26 01:16:46.529886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.197 [2024-07-26 01:16:46.529913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.197 [2024-07-26 01:16:46.529934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.197 [2024-07-26 01:16:46.529948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.197 [2024-07-26 01:16:46.529979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.197 qpair failed and we were unable to recover it. 
00:34:16.197 [2024-07-26 01:16:46.539764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.197 [2024-07-26 01:16:46.539871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.197 [2024-07-26 01:16:46.539897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.197 [2024-07-26 01:16:46.539913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.197 [2024-07-26 01:16:46.539927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.197 [2024-07-26 01:16:46.539956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.197 qpair failed and we were unable to recover it. 
00:34:16.197 [2024-07-26 01:16:46.549825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.197 [2024-07-26 01:16:46.549978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.197 [2024-07-26 01:16:46.550004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.197 [2024-07-26 01:16:46.550019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.197 [2024-07-26 01:16:46.550049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.197 [2024-07-26 01:16:46.550086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.197 qpair failed and we were unable to recover it. 
00:34:16.197 [2024-07-26 01:16:46.559854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.197 [2024-07-26 01:16:46.559974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.197 [2024-07-26 01:16:46.560001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.197 [2024-07-26 01:16:46.560016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.197 [2024-07-26 01:16:46.560030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.197 [2024-07-26 01:16:46.560065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.197 qpair failed and we were unable to recover it. 
00:34:16.197 [2024-07-26 01:16:46.569851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.197 [2024-07-26 01:16:46.569985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.197 [2024-07-26 01:16:46.570011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.197 [2024-07-26 01:16:46.570026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.197 [2024-07-26 01:16:46.570041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.197 [2024-07-26 01:16:46.570078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.197 qpair failed and we were unable to recover it. 
00:34:16.197 [2024-07-26 01:16:46.579873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.197 [2024-07-26 01:16:46.579999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.197 [2024-07-26 01:16:46.580025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.197 [2024-07-26 01:16:46.580040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.197 [2024-07-26 01:16:46.580054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.197 [2024-07-26 01:16:46.580094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.197 qpair failed and we were unable to recover it. 
00:34:16.197 [2024-07-26 01:16:46.589887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.197 [2024-07-26 01:16:46.589996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.197 [2024-07-26 01:16:46.590022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.197 [2024-07-26 01:16:46.590037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.197 [2024-07-26 01:16:46.590050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.197 [2024-07-26 01:16:46.590092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.198 qpair failed and we were unable to recover it. 
00:34:16.198 [2024-07-26 01:16:46.599921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.198 [2024-07-26 01:16:46.600041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.198 [2024-07-26 01:16:46.600074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.198 [2024-07-26 01:16:46.600090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.198 [2024-07-26 01:16:46.600104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.198 [2024-07-26 01:16:46.600134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.198 qpair failed and we were unable to recover it. 
00:34:16.198 [2024-07-26 01:16:46.609984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.198 [2024-07-26 01:16:46.610110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.198 [2024-07-26 01:16:46.610136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.198 [2024-07-26 01:16:46.610151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.198 [2024-07-26 01:16:46.610165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.198 [2024-07-26 01:16:46.610196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.198 qpair failed and we were unable to recover it. 
00:34:16.198 [2024-07-26 01:16:46.620021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.198 [2024-07-26 01:16:46.620134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.198 [2024-07-26 01:16:46.620161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.198 [2024-07-26 01:16:46.620182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.198 [2024-07-26 01:16:46.620197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.198 [2024-07-26 01:16:46.620228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.198 qpair failed and we were unable to recover it. 
00:34:16.457 [2024-07-26 01:16:46.630008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.457 [2024-07-26 01:16:46.630124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.457 [2024-07-26 01:16:46.630151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.457 [2024-07-26 01:16:46.630167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.457 [2024-07-26 01:16:46.630182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.457 [2024-07-26 01:16:46.630224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.457 qpair failed and we were unable to recover it. 
00:34:16.457 [2024-07-26 01:16:46.640110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.457 [2024-07-26 01:16:46.640227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.457 [2024-07-26 01:16:46.640257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.457 [2024-07-26 01:16:46.640276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.457 [2024-07-26 01:16:46.640290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.457 [2024-07-26 01:16:46.640322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.457 qpair failed and we were unable to recover it. 
00:34:16.457 [2024-07-26 01:16:46.650084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.457 [2024-07-26 01:16:46.650204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.457 [2024-07-26 01:16:46.650232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.457 [2024-07-26 01:16:46.650247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.457 [2024-07-26 01:16:46.650261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.457 [2024-07-26 01:16:46.650291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.457 qpair failed and we were unable to recover it. 
00:34:16.457 [2024-07-26 01:16:46.660142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.457 [2024-07-26 01:16:46.660298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.457 [2024-07-26 01:16:46.660325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.457 [2024-07-26 01:16:46.660340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.457 [2024-07-26 01:16:46.660354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.457 [2024-07-26 01:16:46.660383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.457 qpair failed and we were unable to recover it. 
00:34:16.457 [2024-07-26 01:16:46.670200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.457 [2024-07-26 01:16:46.670339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.457 [2024-07-26 01:16:46.670365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.457 [2024-07-26 01:16:46.670380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.457 [2024-07-26 01:16:46.670395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.457 [2024-07-26 01:16:46.670426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.457 qpair failed and we were unable to recover it. 
00:34:16.457 [2024-07-26 01:16:46.680160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.457 [2024-07-26 01:16:46.680279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.457 [2024-07-26 01:16:46.680306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.457 [2024-07-26 01:16:46.680321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.457 [2024-07-26 01:16:46.680335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.457 [2024-07-26 01:16:46.680364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.457 qpair failed and we were unable to recover it. 
00:34:16.457 [2024-07-26 01:16:46.690158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.457 [2024-07-26 01:16:46.690278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.457 [2024-07-26 01:16:46.690305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.457 [2024-07-26 01:16:46.690320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.457 [2024-07-26 01:16:46.690333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.457 [2024-07-26 01:16:46.690363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.457 qpair failed and we were unable to recover it. 
[35 further identical CONNECT failure sequences condensed — retries at ~10 ms intervals from 01:16:46.700213 through 01:16:47.041409, each reporting "Unknown controller ID 0x1", "Connect command failed, rc -5" (sct 1, sc 130), "CQ transport error -6 (No such device or address) on qpair id 1", and ending "qpair failed and we were unable to recover it."]
00:34:16.718 [2024-07-26 01:16:47.051224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.718 [2024-07-26 01:16:47.051335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.718 [2024-07-26 01:16:47.051371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.718 [2024-07-26 01:16:47.051386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.718 [2024-07-26 01:16:47.051400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.718 [2024-07-26 01:16:47.051434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.718 qpair failed and we were unable to recover it. 
00:34:16.718 [2024-07-26 01:16:47.061266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.718 [2024-07-26 01:16:47.061374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.718 [2024-07-26 01:16:47.061406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.718 [2024-07-26 01:16:47.061421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.718 [2024-07-26 01:16:47.061435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.718 [2024-07-26 01:16:47.061465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.718 qpair failed and we were unable to recover it. 
00:34:16.718 [2024-07-26 01:16:47.071278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.718 [2024-07-26 01:16:47.071415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.718 [2024-07-26 01:16:47.071441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.718 [2024-07-26 01:16:47.071456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.718 [2024-07-26 01:16:47.071470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.718 [2024-07-26 01:16:47.071499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.718 qpair failed and we were unable to recover it.
00:34:16.718 [2024-07-26 01:16:47.081303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.718 [2024-07-26 01:16:47.081448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.718 [2024-07-26 01:16:47.081474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.718 [2024-07-26 01:16:47.081488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.718 [2024-07-26 01:16:47.081501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.718 [2024-07-26 01:16:47.081530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.718 qpair failed and we were unable to recover it.
00:34:16.718 [2024-07-26 01:16:47.091362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.719 [2024-07-26 01:16:47.091482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.719 [2024-07-26 01:16:47.091508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.719 [2024-07-26 01:16:47.091523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.719 [2024-07-26 01:16:47.091536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.719 [2024-07-26 01:16:47.091566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.719 qpair failed and we were unable to recover it.
00:34:16.719 [2024-07-26 01:16:47.101363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.719 [2024-07-26 01:16:47.101493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.719 [2024-07-26 01:16:47.101519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.719 [2024-07-26 01:16:47.101533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.719 [2024-07-26 01:16:47.101546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.719 [2024-07-26 01:16:47.101576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.719 qpair failed and we were unable to recover it.
00:34:16.719 [2024-07-26 01:16:47.111359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.719 [2024-07-26 01:16:47.111469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.719 [2024-07-26 01:16:47.111500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.719 [2024-07-26 01:16:47.111516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.719 [2024-07-26 01:16:47.111529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.719 [2024-07-26 01:16:47.111558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.719 qpair failed and we were unable to recover it.
00:34:16.719 [2024-07-26 01:16:47.121501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.719 [2024-07-26 01:16:47.121616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.719 [2024-07-26 01:16:47.121641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.719 [2024-07-26 01:16:47.121656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.719 [2024-07-26 01:16:47.121669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.719 [2024-07-26 01:16:47.121698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.719 qpair failed and we were unable to recover it.
00:34:16.719 [2024-07-26 01:16:47.131412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.719 [2024-07-26 01:16:47.131523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.719 [2024-07-26 01:16:47.131549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.719 [2024-07-26 01:16:47.131564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.719 [2024-07-26 01:16:47.131577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.719 [2024-07-26 01:16:47.131618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.719 qpair failed and we were unable to recover it.
00:34:16.719 [2024-07-26 01:16:47.141489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.719 [2024-07-26 01:16:47.141609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.719 [2024-07-26 01:16:47.141636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.719 [2024-07-26 01:16:47.141651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.719 [2024-07-26 01:16:47.141664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.719 [2024-07-26 01:16:47.141693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.719 qpair failed and we were unable to recover it.
00:34:16.978 [2024-07-26 01:16:47.151466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.978 [2024-07-26 01:16:47.151572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.978 [2024-07-26 01:16:47.151599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.978 [2024-07-26 01:16:47.151613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.978 [2024-07-26 01:16:47.151626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.978 [2024-07-26 01:16:47.151662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.978 qpair failed and we were unable to recover it.
00:34:16.978 [2024-07-26 01:16:47.161557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.978 [2024-07-26 01:16:47.161668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.978 [2024-07-26 01:16:47.161695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.978 [2024-07-26 01:16:47.161709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.978 [2024-07-26 01:16:47.161723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.978 [2024-07-26 01:16:47.161753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.978 qpair failed and we were unable to recover it.
00:34:16.978 [2024-07-26 01:16:47.171539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.978 [2024-07-26 01:16:47.171660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.978 [2024-07-26 01:16:47.171687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.978 [2024-07-26 01:16:47.171702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.978 [2024-07-26 01:16:47.171715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.978 [2024-07-26 01:16:47.171745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.978 qpair failed and we were unable to recover it.
00:34:16.978 [2024-07-26 01:16:47.181546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.978 [2024-07-26 01:16:47.181677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.978 [2024-07-26 01:16:47.181705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.978 [2024-07-26 01:16:47.181720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.978 [2024-07-26 01:16:47.181733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.978 [2024-07-26 01:16:47.181762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.978 qpair failed and we were unable to recover it.
00:34:16.978 [2024-07-26 01:16:47.191602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.978 [2024-07-26 01:16:47.191707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.978 [2024-07-26 01:16:47.191733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.978 [2024-07-26 01:16:47.191748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.978 [2024-07-26 01:16:47.191761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.978 [2024-07-26 01:16:47.191790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.978 qpair failed and we were unable to recover it.
00:34:16.978 [2024-07-26 01:16:47.201693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.978 [2024-07-26 01:16:47.201847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.978 [2024-07-26 01:16:47.201879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.978 [2024-07-26 01:16:47.201894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.978 [2024-07-26 01:16:47.201907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.978 [2024-07-26 01:16:47.201936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.978 qpair failed and we were unable to recover it.
00:34:16.978 [2024-07-26 01:16:47.211647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.978 [2024-07-26 01:16:47.211758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.978 [2024-07-26 01:16:47.211784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.978 [2024-07-26 01:16:47.211799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.978 [2024-07-26 01:16:47.211812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.978 [2024-07-26 01:16:47.211858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.978 qpair failed and we were unable to recover it.
00:34:16.978 [2024-07-26 01:16:47.221697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.978 [2024-07-26 01:16:47.221810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.978 [2024-07-26 01:16:47.221836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.978 [2024-07-26 01:16:47.221851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.978 [2024-07-26 01:16:47.221863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.978 [2024-07-26 01:16:47.221892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.978 qpair failed and we were unable to recover it.
00:34:16.978 [2024-07-26 01:16:47.231741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.978 [2024-07-26 01:16:47.231871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.978 [2024-07-26 01:16:47.231897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.978 [2024-07-26 01:16:47.231911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.978 [2024-07-26 01:16:47.231924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.978 [2024-07-26 01:16:47.231954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.978 qpair failed and we were unable to recover it.
00:34:16.978 [2024-07-26 01:16:47.241753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.978 [2024-07-26 01:16:47.241913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.978 [2024-07-26 01:16:47.241940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.978 [2024-07-26 01:16:47.241955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.978 [2024-07-26 01:16:47.241977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.978 [2024-07-26 01:16:47.242010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.978 qpair failed and we were unable to recover it.
00:34:16.978 [2024-07-26 01:16:47.251834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.978 [2024-07-26 01:16:47.251972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.978 [2024-07-26 01:16:47.251999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.978 [2024-07-26 01:16:47.252013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.978 [2024-07-26 01:16:47.252026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.978 [2024-07-26 01:16:47.252055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.978 qpair failed and we were unable to recover it.
00:34:16.978 [2024-07-26 01:16:47.261785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.978 [2024-07-26 01:16:47.261884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.978 [2024-07-26 01:16:47.261910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.978 [2024-07-26 01:16:47.261924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.978 [2024-07-26 01:16:47.261937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.978 [2024-07-26 01:16:47.261967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.978 qpair failed and we were unable to recover it.
00:34:16.978 [2024-07-26 01:16:47.271805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.978 [2024-07-26 01:16:47.271929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.978 [2024-07-26 01:16:47.271955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.978 [2024-07-26 01:16:47.271969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.978 [2024-07-26 01:16:47.271982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.979 [2024-07-26 01:16:47.272011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.979 qpair failed and we were unable to recover it.
00:34:16.979 [2024-07-26 01:16:47.281864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.979 [2024-07-26 01:16:47.282024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.979 [2024-07-26 01:16:47.282049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.979 [2024-07-26 01:16:47.282070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.979 [2024-07-26 01:16:47.282084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.979 [2024-07-26 01:16:47.282114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.979 qpair failed and we were unable to recover it.
00:34:16.979 [2024-07-26 01:16:47.291856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.979 [2024-07-26 01:16:47.291965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.979 [2024-07-26 01:16:47.291991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.979 [2024-07-26 01:16:47.292006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.979 [2024-07-26 01:16:47.292019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.979 [2024-07-26 01:16:47.292047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.979 qpair failed and we were unable to recover it.
00:34:16.979 [2024-07-26 01:16:47.301928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.979 [2024-07-26 01:16:47.302035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.979 [2024-07-26 01:16:47.302069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.979 [2024-07-26 01:16:47.302087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.979 [2024-07-26 01:16:47.302101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.979 [2024-07-26 01:16:47.302146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.979 qpair failed and we were unable to recover it.
00:34:16.979 [2024-07-26 01:16:47.312030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.979 [2024-07-26 01:16:47.312171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.979 [2024-07-26 01:16:47.312198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.979 [2024-07-26 01:16:47.312212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.979 [2024-07-26 01:16:47.312225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.979 [2024-07-26 01:16:47.312255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.979 qpair failed and we were unable to recover it.
00:34:16.979 [2024-07-26 01:16:47.321957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.979 [2024-07-26 01:16:47.322117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.979 [2024-07-26 01:16:47.322143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.979 [2024-07-26 01:16:47.322158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.979 [2024-07-26 01:16:47.322170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.979 [2024-07-26 01:16:47.322200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.979 qpair failed and we were unable to recover it.
00:34:16.979 [2024-07-26 01:16:47.332105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.979 [2024-07-26 01:16:47.332214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.979 [2024-07-26 01:16:47.332241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.979 [2024-07-26 01:16:47.332256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.979 [2024-07-26 01:16:47.332274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.979 [2024-07-26 01:16:47.332306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.979 qpair failed and we were unable to recover it.
00:34:16.979 [2024-07-26 01:16:47.342001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.979 [2024-07-26 01:16:47.342111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.979 [2024-07-26 01:16:47.342137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.979 [2024-07-26 01:16:47.342152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.979 [2024-07-26 01:16:47.342165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.979 [2024-07-26 01:16:47.342194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.979 qpair failed and we were unable to recover it.
00:34:16.979 [2024-07-26 01:16:47.352076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.979 [2024-07-26 01:16:47.352184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.979 [2024-07-26 01:16:47.352210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.979 [2024-07-26 01:16:47.352225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.979 [2024-07-26 01:16:47.352237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.979 [2024-07-26 01:16:47.352266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.979 qpair failed and we were unable to recover it.
00:34:16.979 [2024-07-26 01:16:47.362079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.979 [2024-07-26 01:16:47.362202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.979 [2024-07-26 01:16:47.362229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.979 [2024-07-26 01:16:47.362243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.979 [2024-07-26 01:16:47.362256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:16.979 [2024-07-26 01:16:47.362285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:16.979 qpair failed and we were unable to recover it.
00:34:16.979 [2024-07-26 01:16:47.372094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.979 [2024-07-26 01:16:47.372206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.979 [2024-07-26 01:16:47.372232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.979 [2024-07-26 01:16:47.372247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.979 [2024-07-26 01:16:47.372260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.979 [2024-07-26 01:16:47.372304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.979 qpair failed and we were unable to recover it. 
00:34:16.979 [2024-07-26 01:16:47.382144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.979 [2024-07-26 01:16:47.382254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.979 [2024-07-26 01:16:47.382280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.979 [2024-07-26 01:16:47.382295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.979 [2024-07-26 01:16:47.382308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.979 [2024-07-26 01:16:47.382338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.979 qpair failed and we were unable to recover it. 
00:34:16.979 [2024-07-26 01:16:47.392221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.979 [2024-07-26 01:16:47.392358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.979 [2024-07-26 01:16:47.392384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.979 [2024-07-26 01:16:47.392399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.979 [2024-07-26 01:16:47.392412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.979 [2024-07-26 01:16:47.392443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.979 qpair failed and we were unable to recover it. 
00:34:16.979 [2024-07-26 01:16:47.402175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.979 [2024-07-26 01:16:47.402302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.979 [2024-07-26 01:16:47.402328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.979 [2024-07-26 01:16:47.402342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.979 [2024-07-26 01:16:47.402355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:16.979 [2024-07-26 01:16:47.402386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:16.980 qpair failed and we were unable to recover it. 
00:34:17.238 [2024-07-26 01:16:47.412205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.238 [2024-07-26 01:16:47.412312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.238 [2024-07-26 01:16:47.412339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.238 [2024-07-26 01:16:47.412353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.238 [2024-07-26 01:16:47.412366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.238 [2024-07-26 01:16:47.412397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.238 qpair failed and we were unable to recover it. 
00:34:17.238 [2024-07-26 01:16:47.422222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.238 [2024-07-26 01:16:47.422344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.238 [2024-07-26 01:16:47.422371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.238 [2024-07-26 01:16:47.422392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.238 [2024-07-26 01:16:47.422406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.238 [2024-07-26 01:16:47.422435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.238 qpair failed and we were unable to recover it. 
00:34:17.238 [2024-07-26 01:16:47.432251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.238 [2024-07-26 01:16:47.432357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.238 [2024-07-26 01:16:47.432384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.238 [2024-07-26 01:16:47.432398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.238 [2024-07-26 01:16:47.432411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.238 [2024-07-26 01:16:47.432441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.238 qpair failed and we were unable to recover it. 
00:34:17.238 [2024-07-26 01:16:47.442308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.238 [2024-07-26 01:16:47.442450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.238 [2024-07-26 01:16:47.442477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.238 [2024-07-26 01:16:47.442491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.238 [2024-07-26 01:16:47.442505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.238 [2024-07-26 01:16:47.442534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.238 qpair failed and we were unable to recover it. 
00:34:17.238 [2024-07-26 01:16:47.452357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.238 [2024-07-26 01:16:47.452487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.238 [2024-07-26 01:16:47.452515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.238 [2024-07-26 01:16:47.452530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.238 [2024-07-26 01:16:47.452547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.238 [2024-07-26 01:16:47.452579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.238 qpair failed and we were unable to recover it. 
00:34:17.238 [2024-07-26 01:16:47.462364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.238 [2024-07-26 01:16:47.462472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.238 [2024-07-26 01:16:47.462502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.238 [2024-07-26 01:16:47.462519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.238 [2024-07-26 01:16:47.462533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.238 [2024-07-26 01:16:47.462563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.238 qpair failed and we were unable to recover it. 
00:34:17.238 [2024-07-26 01:16:47.472361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.238 [2024-07-26 01:16:47.472472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.238 [2024-07-26 01:16:47.472499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.238 [2024-07-26 01:16:47.472514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.238 [2024-07-26 01:16:47.472527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.239 [2024-07-26 01:16:47.472570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.239 qpair failed and we were unable to recover it. 
00:34:17.239 [2024-07-26 01:16:47.482435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.239 [2024-07-26 01:16:47.482598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.239 [2024-07-26 01:16:47.482624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.239 [2024-07-26 01:16:47.482639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.239 [2024-07-26 01:16:47.482653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.239 [2024-07-26 01:16:47.482683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.239 qpair failed and we were unable to recover it. 
00:34:17.239 [2024-07-26 01:16:47.492495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.239 [2024-07-26 01:16:47.492606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.239 [2024-07-26 01:16:47.492633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.239 [2024-07-26 01:16:47.492647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.239 [2024-07-26 01:16:47.492660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.239 [2024-07-26 01:16:47.492690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.239 qpair failed and we were unable to recover it. 
00:34:17.239 [2024-07-26 01:16:47.502449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.239 [2024-07-26 01:16:47.502577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.239 [2024-07-26 01:16:47.502604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.239 [2024-07-26 01:16:47.502619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.239 [2024-07-26 01:16:47.502632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.239 [2024-07-26 01:16:47.502663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.239 qpair failed and we were unable to recover it. 
00:34:17.239 [2024-07-26 01:16:47.512508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.239 [2024-07-26 01:16:47.512626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.239 [2024-07-26 01:16:47.512657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.239 [2024-07-26 01:16:47.512672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.239 [2024-07-26 01:16:47.512685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.239 [2024-07-26 01:16:47.512717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.239 qpair failed and we were unable to recover it. 
00:34:17.239 [2024-07-26 01:16:47.522564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.239 [2024-07-26 01:16:47.522716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.239 [2024-07-26 01:16:47.522742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.239 [2024-07-26 01:16:47.522757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.239 [2024-07-26 01:16:47.522770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.239 [2024-07-26 01:16:47.522800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.239 qpair failed and we were unable to recover it. 
00:34:17.239 [2024-07-26 01:16:47.532550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.239 [2024-07-26 01:16:47.532659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.239 [2024-07-26 01:16:47.532685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.239 [2024-07-26 01:16:47.532700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.239 [2024-07-26 01:16:47.532713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.239 [2024-07-26 01:16:47.532744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.239 qpair failed and we were unable to recover it. 
00:34:17.239 [2024-07-26 01:16:47.542612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.239 [2024-07-26 01:16:47.542716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.239 [2024-07-26 01:16:47.542742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.239 [2024-07-26 01:16:47.542757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.239 [2024-07-26 01:16:47.542770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.239 [2024-07-26 01:16:47.542811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.239 qpair failed and we were unable to recover it. 
00:34:17.239 [2024-07-26 01:16:47.552581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.239 [2024-07-26 01:16:47.552708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.239 [2024-07-26 01:16:47.552734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.239 [2024-07-26 01:16:47.552749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.239 [2024-07-26 01:16:47.552761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.239 [2024-07-26 01:16:47.552796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.239 qpair failed and we were unable to recover it. 
00:34:17.239 [2024-07-26 01:16:47.562628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.239 [2024-07-26 01:16:47.562735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.239 [2024-07-26 01:16:47.562761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.239 [2024-07-26 01:16:47.562776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.239 [2024-07-26 01:16:47.562789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.239 [2024-07-26 01:16:47.562817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.239 qpair failed and we were unable to recover it. 
00:34:17.239 [2024-07-26 01:16:47.572663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.239 [2024-07-26 01:16:47.572812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.239 [2024-07-26 01:16:47.572841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.239 [2024-07-26 01:16:47.572855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.239 [2024-07-26 01:16:47.572868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.239 [2024-07-26 01:16:47.572899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.239 qpair failed and we were unable to recover it. 
00:34:17.239 [2024-07-26 01:16:47.582670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.239 [2024-07-26 01:16:47.582775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.239 [2024-07-26 01:16:47.582802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.239 [2024-07-26 01:16:47.582817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.239 [2024-07-26 01:16:47.582830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.239 [2024-07-26 01:16:47.582862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.239 qpair failed and we were unable to recover it. 
00:34:17.239 [2024-07-26 01:16:47.592693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.239 [2024-07-26 01:16:47.592823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.239 [2024-07-26 01:16:47.592849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.239 [2024-07-26 01:16:47.592864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.239 [2024-07-26 01:16:47.592877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.239 [2024-07-26 01:16:47.592906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.239 qpair failed and we were unable to recover it. 
00:34:17.239 [2024-07-26 01:16:47.602754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.239 [2024-07-26 01:16:47.602862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.239 [2024-07-26 01:16:47.602893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.239 [2024-07-26 01:16:47.602909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.239 [2024-07-26 01:16:47.602922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.239 [2024-07-26 01:16:47.602954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.239 qpair failed and we were unable to recover it. 
00:34:17.240 [2024-07-26 01:16:47.612759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.240 [2024-07-26 01:16:47.612869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.240 [2024-07-26 01:16:47.612896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.240 [2024-07-26 01:16:47.612911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.240 [2024-07-26 01:16:47.612924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.240 [2024-07-26 01:16:47.612954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.240 qpair failed and we were unable to recover it. 
00:34:17.240 [2024-07-26 01:16:47.622829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.240 [2024-07-26 01:16:47.622937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.240 [2024-07-26 01:16:47.622963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.240 [2024-07-26 01:16:47.622978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.240 [2024-07-26 01:16:47.622991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.240 [2024-07-26 01:16:47.623020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.240 qpair failed and we were unable to recover it. 
00:34:17.240 [2024-07-26 01:16:47.632827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.240 [2024-07-26 01:16:47.632951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.240 [2024-07-26 01:16:47.632977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.240 [2024-07-26 01:16:47.632992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.240 [2024-07-26 01:16:47.633005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.240 [2024-07-26 01:16:47.633033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.240 qpair failed and we were unable to recover it. 
00:34:17.240 [2024-07-26 01:16:47.642865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.240 [2024-07-26 01:16:47.643005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.240 [2024-07-26 01:16:47.643031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.240 [2024-07-26 01:16:47.643045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.240 [2024-07-26 01:16:47.643064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.240 [2024-07-26 01:16:47.643102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.240 qpair failed and we were unable to recover it. 
00:34:17.240 [2024-07-26 01:16:47.652876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.240 [2024-07-26 01:16:47.653005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.240 [2024-07-26 01:16:47.653031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.240 [2024-07-26 01:16:47.653045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.240 [2024-07-26 01:16:47.653064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.240 [2024-07-26 01:16:47.653097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.240 qpair failed and we were unable to recover it. 
00:34:17.240 [2024-07-26 01:16:47.662885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.240 [2024-07-26 01:16:47.662989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.240 [2024-07-26 01:16:47.663015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.240 [2024-07-26 01:16:47.663030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.240 [2024-07-26 01:16:47.663043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.240 [2024-07-26 01:16:47.663078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.240 qpair failed and we were unable to recover it. 
00:34:17.499 [2024-07-26 01:16:47.672954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.499 [2024-07-26 01:16:47.673107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.499 [2024-07-26 01:16:47.673133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.499 [2024-07-26 01:16:47.673148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.499 [2024-07-26 01:16:47.673161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.499 [2024-07-26 01:16:47.673190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.499 qpair failed and we were unable to recover it. 
00:34:17.499 [2024-07-26 01:16:47.682961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.499 [2024-07-26 01:16:47.683080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.499 [2024-07-26 01:16:47.683116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.499 [2024-07-26 01:16:47.683132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.499 [2024-07-26 01:16:47.683145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.499 [2024-07-26 01:16:47.683177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.499 qpair failed and we were unable to recover it. 
00:34:17.499 [2024-07-26 01:16:47.692963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.499 [2024-07-26 01:16:47.693115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.499 [2024-07-26 01:16:47.693141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.499 [2024-07-26 01:16:47.693156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.499 [2024-07-26 01:16:47.693169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.499 [2024-07-26 01:16:47.693198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.499 qpair failed and we were unable to recover it. 
00:34:17.499 [2024-07-26 01:16:47.703063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.499 [2024-07-26 01:16:47.703179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.499 [2024-07-26 01:16:47.703206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.499 [2024-07-26 01:16:47.703221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.499 [2024-07-26 01:16:47.703238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.499 [2024-07-26 01:16:47.703268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.499 qpair failed and we were unable to recover it. 
00:34:17.499 [2024-07-26 01:16:47.713077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.499 [2024-07-26 01:16:47.713189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.499 [2024-07-26 01:16:47.713215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.499 [2024-07-26 01:16:47.713230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.499 [2024-07-26 01:16:47.713243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.499 [2024-07-26 01:16:47.713273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.499 qpair failed and we were unable to recover it. 
00:34:17.499 [2024-07-26 01:16:47.723081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.499 [2024-07-26 01:16:47.723211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.499 [2024-07-26 01:16:47.723237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.499 [2024-07-26 01:16:47.723252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.499 [2024-07-26 01:16:47.723265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.499 [2024-07-26 01:16:47.723294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.499 qpair failed and we were unable to recover it. 
00:34:17.499 [2024-07-26 01:16:47.733090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.499 [2024-07-26 01:16:47.733207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.499 [2024-07-26 01:16:47.733233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.499 [2024-07-26 01:16:47.733248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.499 [2024-07-26 01:16:47.733266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.499 [2024-07-26 01:16:47.733297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.499 qpair failed and we were unable to recover it. 
00:34:17.499 [2024-07-26 01:16:47.743153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.499 [2024-07-26 01:16:47.743258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.499 [2024-07-26 01:16:47.743283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.499 [2024-07-26 01:16:47.743298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.499 [2024-07-26 01:16:47.743311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.499 [2024-07-26 01:16:47.743340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.499 qpair failed and we were unable to recover it. 
00:34:17.499 [2024-07-26 01:16:47.753171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.499 [2024-07-26 01:16:47.753305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.499 [2024-07-26 01:16:47.753330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.499 [2024-07-26 01:16:47.753345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.499 [2024-07-26 01:16:47.753358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.499 [2024-07-26 01:16:47.753387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.499 qpair failed and we were unable to recover it. 
00:34:17.499 [2024-07-26 01:16:47.763206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.499 [2024-07-26 01:16:47.763333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.499 [2024-07-26 01:16:47.763358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.499 [2024-07-26 01:16:47.763373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.499 [2024-07-26 01:16:47.763386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.499 [2024-07-26 01:16:47.763415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.499 qpair failed and we were unable to recover it. 
00:34:17.499 [2024-07-26 01:16:47.773209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.499 [2024-07-26 01:16:47.773362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.499 [2024-07-26 01:16:47.773387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.499 [2024-07-26 01:16:47.773402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.499 [2024-07-26 01:16:47.773415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.499 [2024-07-26 01:16:47.773444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.499 qpair failed and we were unable to recover it. 
00:34:17.499 [2024-07-26 01:16:47.783271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.499 [2024-07-26 01:16:47.783382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.499 [2024-07-26 01:16:47.783407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.499 [2024-07-26 01:16:47.783421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.499 [2024-07-26 01:16:47.783434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.500 [2024-07-26 01:16:47.783464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.500 qpair failed and we were unable to recover it. 
00:34:17.500 [2024-07-26 01:16:47.793278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.500 [2024-07-26 01:16:47.793388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.500 [2024-07-26 01:16:47.793414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.500 [2024-07-26 01:16:47.793429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.500 [2024-07-26 01:16:47.793442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.500 [2024-07-26 01:16:47.793474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.500 qpair failed and we were unable to recover it. 
00:34:17.500 [2024-07-26 01:16:47.803369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.500 [2024-07-26 01:16:47.803524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.500 [2024-07-26 01:16:47.803550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.500 [2024-07-26 01:16:47.803564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.500 [2024-07-26 01:16:47.803577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.500 [2024-07-26 01:16:47.803606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.500 qpair failed and we were unable to recover it. 
00:34:17.500 [2024-07-26 01:16:47.813331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.500 [2024-07-26 01:16:47.813442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.500 [2024-07-26 01:16:47.813468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.500 [2024-07-26 01:16:47.813482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.500 [2024-07-26 01:16:47.813496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.500 [2024-07-26 01:16:47.813525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.500 qpair failed and we were unable to recover it. 
00:34:17.500 [2024-07-26 01:16:47.823345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.500 [2024-07-26 01:16:47.823463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.500 [2024-07-26 01:16:47.823490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.500 [2024-07-26 01:16:47.823510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.500 [2024-07-26 01:16:47.823524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.500 [2024-07-26 01:16:47.823554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.500 qpair failed and we were unable to recover it. 
00:34:17.500 [2024-07-26 01:16:47.833384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.500 [2024-07-26 01:16:47.833528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.500 [2024-07-26 01:16:47.833554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.500 [2024-07-26 01:16:47.833569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.500 [2024-07-26 01:16:47.833582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.500 [2024-07-26 01:16:47.833612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.500 qpair failed and we were unable to recover it. 
00:34:17.500 [2024-07-26 01:16:47.843443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.500 [2024-07-26 01:16:47.843556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.500 [2024-07-26 01:16:47.843582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.500 [2024-07-26 01:16:47.843596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.500 [2024-07-26 01:16:47.843609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.500 [2024-07-26 01:16:47.843639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.500 qpair failed and we were unable to recover it. 
00:34:17.500 [2024-07-26 01:16:47.853482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.500 [2024-07-26 01:16:47.853595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.500 [2024-07-26 01:16:47.853622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.500 [2024-07-26 01:16:47.853637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.500 [2024-07-26 01:16:47.853654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.500 [2024-07-26 01:16:47.853685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.500 qpair failed and we were unable to recover it. 
00:34:17.500 [2024-07-26 01:16:47.863605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.500 [2024-07-26 01:16:47.863745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.500 [2024-07-26 01:16:47.863772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.500 [2024-07-26 01:16:47.863787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.500 [2024-07-26 01:16:47.863798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.500 [2024-07-26 01:16:47.863827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.500 qpair failed and we were unable to recover it. 
00:34:17.500 [2024-07-26 01:16:47.873544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.500 [2024-07-26 01:16:47.873658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.500 [2024-07-26 01:16:47.873684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.500 [2024-07-26 01:16:47.873698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.500 [2024-07-26 01:16:47.873712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.500 [2024-07-26 01:16:47.873741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.500 qpair failed and we were unable to recover it. 
00:34:17.500 [2024-07-26 01:16:47.883526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.500 [2024-07-26 01:16:47.883636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.500 [2024-07-26 01:16:47.883662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.500 [2024-07-26 01:16:47.883677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.500 [2024-07-26 01:16:47.883690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.500 [2024-07-26 01:16:47.883719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.500 qpair failed and we were unable to recover it. 
00:34:17.500 [2024-07-26 01:16:47.893563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.500 [2024-07-26 01:16:47.893669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.500 [2024-07-26 01:16:47.893695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.500 [2024-07-26 01:16:47.893709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.500 [2024-07-26 01:16:47.893721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.501 [2024-07-26 01:16:47.893750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.501 qpair failed and we were unable to recover it. 
00:34:17.501 [2024-07-26 01:16:47.903606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.501 [2024-07-26 01:16:47.903737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.501 [2024-07-26 01:16:47.903763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.501 [2024-07-26 01:16:47.903777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.501 [2024-07-26 01:16:47.903790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.501 [2024-07-26 01:16:47.903818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.501 qpair failed and we were unable to recover it. 
00:34:17.501 [2024-07-26 01:16:47.913639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.501 [2024-07-26 01:16:47.913773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.501 [2024-07-26 01:16:47.913802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.501 [2024-07-26 01:16:47.913817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.501 [2024-07-26 01:16:47.913829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.501 [2024-07-26 01:16:47.913871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.501 qpair failed and we were unable to recover it. 
00:34:17.501 [2024-07-26 01:16:47.923726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.501 [2024-07-26 01:16:47.923870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.501 [2024-07-26 01:16:47.923896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.501 [2024-07-26 01:16:47.923911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.501 [2024-07-26 01:16:47.923924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.501 [2024-07-26 01:16:47.923967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.501 qpair failed and we were unable to recover it. 
00:34:17.760 [2024-07-26 01:16:47.933668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.760 [2024-07-26 01:16:47.933795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.760 [2024-07-26 01:16:47.933821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.760 [2024-07-26 01:16:47.933836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.760 [2024-07-26 01:16:47.933849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.760 [2024-07-26 01:16:47.933877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.760 qpair failed and we were unable to recover it. 
00:34:17.760 [2024-07-26 01:16:47.943709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.760 [2024-07-26 01:16:47.943838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.760 [2024-07-26 01:16:47.943865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.760 [2024-07-26 01:16:47.943879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.760 [2024-07-26 01:16:47.943892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.760 [2024-07-26 01:16:47.943935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.760 qpair failed and we were unable to recover it. 
00:34:17.760 [2024-07-26 01:16:47.953736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.760 [2024-07-26 01:16:47.953847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.760 [2024-07-26 01:16:47.953874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.760 [2024-07-26 01:16:47.953888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.760 [2024-07-26 01:16:47.953901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.760 [2024-07-26 01:16:47.953936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.760 qpair failed and we were unable to recover it. 
00:34:17.760 [2024-07-26 01:16:47.963778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.760 [2024-07-26 01:16:47.963908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.760 [2024-07-26 01:16:47.963933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.760 [2024-07-26 01:16:47.963948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.760 [2024-07-26 01:16:47.963961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:17.760 [2024-07-26 01:16:47.963989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:17.760 qpair failed and we were unable to recover it. 
00:34:17.760 [2024-07-26 01:16:47.973800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.760 [2024-07-26 01:16:47.973916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.760 [2024-07-26 01:16:47.973942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.760 [2024-07-26 01:16:47.973956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.760 [2024-07-26 01:16:47.973969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:17.760 [2024-07-26 01:16:47.973998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:17.760 qpair failed and we were unable to recover it.
00:34:17.760 [2024-07-26 01:16:47.983840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.760 [2024-07-26 01:16:47.983970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.760 [2024-07-26 01:16:47.983996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.760 [2024-07-26 01:16:47.984010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.760 [2024-07-26 01:16:47.984024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:17.760 [2024-07-26 01:16:47.984052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:17.760 qpair failed and we were unable to recover it.
00:34:17.760 [2024-07-26 01:16:47.993884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.760 [2024-07-26 01:16:47.994004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.760 [2024-07-26 01:16:47.994030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.760 [2024-07-26 01:16:47.994045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.760 [2024-07-26 01:16:47.994069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:17.760 [2024-07-26 01:16:47.994102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:17.760 qpair failed and we were unable to recover it.
00:34:17.760 [2024-07-26 01:16:48.003912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.760 [2024-07-26 01:16:48.004025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.760 [2024-07-26 01:16:48.004056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.760 [2024-07-26 01:16:48.004078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.760 [2024-07-26 01:16:48.004093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:17.760 [2024-07-26 01:16:48.004123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:17.760 qpair failed and we were unable to recover it.
00:34:17.760 [2024-07-26 01:16:48.013943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.760 [2024-07-26 01:16:48.014074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.760 [2024-07-26 01:16:48.014100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.760 [2024-07-26 01:16:48.014115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.760 [2024-07-26 01:16:48.014128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:17.760 [2024-07-26 01:16:48.014159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:17.760 qpair failed and we were unable to recover it.
00:34:17.760 [2024-07-26 01:16:48.023918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.760 [2024-07-26 01:16:48.024021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.761 [2024-07-26 01:16:48.024046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.761 [2024-07-26 01:16:48.024067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.761 [2024-07-26 01:16:48.024082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:17.761 [2024-07-26 01:16:48.024112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:17.761 qpair failed and we were unable to recover it.
00:34:17.761 [2024-07-26 01:16:48.034005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.761 [2024-07-26 01:16:48.034115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.761 [2024-07-26 01:16:48.034140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.761 [2024-07-26 01:16:48.034155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.761 [2024-07-26 01:16:48.034168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:17.761 [2024-07-26 01:16:48.034198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:17.761 qpair failed and we were unable to recover it.
00:34:17.761 [2024-07-26 01:16:48.044021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.761 [2024-07-26 01:16:48.044138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.761 [2024-07-26 01:16:48.044164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.761 [2024-07-26 01:16:48.044179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.761 [2024-07-26 01:16:48.044191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:17.761 [2024-07-26 01:16:48.044227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:17.761 qpair failed and we were unable to recover it.
00:34:17.761 [2024-07-26 01:16:48.054030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.761 [2024-07-26 01:16:48.054154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.761 [2024-07-26 01:16:48.054180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.761 [2024-07-26 01:16:48.054195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.761 [2024-07-26 01:16:48.054208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:17.761 [2024-07-26 01:16:48.054251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:17.761 qpair failed and we were unable to recover it.
00:34:17.761 [2024-07-26 01:16:48.064041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.761 [2024-07-26 01:16:48.064199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.761 [2024-07-26 01:16:48.064226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.761 [2024-07-26 01:16:48.064240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.761 [2024-07-26 01:16:48.064253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:17.761 [2024-07-26 01:16:48.064283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:17.761 qpair failed and we were unable to recover it.
00:34:17.761 [2024-07-26 01:16:48.074108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.761 [2024-07-26 01:16:48.074244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.761 [2024-07-26 01:16:48.074270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.761 [2024-07-26 01:16:48.074284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.761 [2024-07-26 01:16:48.074298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:17.761 [2024-07-26 01:16:48.074340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:17.761 qpair failed and we were unable to recover it.
00:34:17.761 [2024-07-26 01:16:48.084110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.761 [2024-07-26 01:16:48.084219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.761 [2024-07-26 01:16:48.084245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.761 [2024-07-26 01:16:48.084261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.761 [2024-07-26 01:16:48.084274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:17.761 [2024-07-26 01:16:48.084303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:17.761 qpair failed and we were unable to recover it.
00:34:17.761 [2024-07-26 01:16:48.094165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.761 [2024-07-26 01:16:48.094325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.761 [2024-07-26 01:16:48.094359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.761 [2024-07-26 01:16:48.094374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.761 [2024-07-26 01:16:48.094387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:17.761 [2024-07-26 01:16:48.094417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:17.761 qpair failed and we were unable to recover it.
00:34:17.761 [2024-07-26 01:16:48.104174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.761 [2024-07-26 01:16:48.104308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.761 [2024-07-26 01:16:48.104335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.761 [2024-07-26 01:16:48.104349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.761 [2024-07-26 01:16:48.104362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:17.761 [2024-07-26 01:16:48.104391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:17.761 qpair failed and we were unable to recover it.
00:34:17.761 [2024-07-26 01:16:48.114190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.761 [2024-07-26 01:16:48.114295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.761 [2024-07-26 01:16:48.114321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.761 [2024-07-26 01:16:48.114336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.761 [2024-07-26 01:16:48.114349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:17.761 [2024-07-26 01:16:48.114378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:17.761 qpair failed and we were unable to recover it.
00:34:17.761 [2024-07-26 01:16:48.124217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.761 [2024-07-26 01:16:48.124340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.761 [2024-07-26 01:16:48.124365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.761 [2024-07-26 01:16:48.124380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.761 [2024-07-26 01:16:48.124392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:17.761 [2024-07-26 01:16:48.124424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:17.761 qpair failed and we were unable to recover it.
00:34:17.761 [2024-07-26 01:16:48.134257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.761 [2024-07-26 01:16:48.134394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.761 [2024-07-26 01:16:48.134420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.761 [2024-07-26 01:16:48.134434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.761 [2024-07-26 01:16:48.134452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:17.761 [2024-07-26 01:16:48.134484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:17.761 qpair failed and we were unable to recover it.
00:34:17.761 [2024-07-26 01:16:48.144296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.761 [2024-07-26 01:16:48.144440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.761 [2024-07-26 01:16:48.144466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.761 [2024-07-26 01:16:48.144480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.761 [2024-07-26 01:16:48.144493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:17.761 [2024-07-26 01:16:48.144524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:17.761 qpair failed and we were unable to recover it.
00:34:17.761 [2024-07-26 01:16:48.154363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.762 [2024-07-26 01:16:48.154471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.762 [2024-07-26 01:16:48.154497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.762 [2024-07-26 01:16:48.154511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.762 [2024-07-26 01:16:48.154524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:17.762 [2024-07-26 01:16:48.154553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:17.762 qpair failed and we were unable to recover it.
00:34:17.762 [2024-07-26 01:16:48.164369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.762 [2024-07-26 01:16:48.164506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.762 [2024-07-26 01:16:48.164531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.762 [2024-07-26 01:16:48.164546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.762 [2024-07-26 01:16:48.164559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:17.762 [2024-07-26 01:16:48.164587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:17.762 qpair failed and we were unable to recover it.
00:34:17.762 [2024-07-26 01:16:48.174389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.762 [2024-07-26 01:16:48.174500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.762 [2024-07-26 01:16:48.174526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.762 [2024-07-26 01:16:48.174541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.762 [2024-07-26 01:16:48.174555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:17.762 [2024-07-26 01:16:48.174584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:17.762 qpair failed and we were unable to recover it.
00:34:17.762 [2024-07-26 01:16:48.184401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.762 [2024-07-26 01:16:48.184513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.762 [2024-07-26 01:16:48.184538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.762 [2024-07-26 01:16:48.184553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.762 [2024-07-26 01:16:48.184567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:17.762 [2024-07-26 01:16:48.184596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:17.762 qpair failed and we were unable to recover it.
00:34:18.021 [2024-07-26 01:16:48.194415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:18.021 [2024-07-26 01:16:48.194525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:18.021 [2024-07-26 01:16:48.194551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:18.021 [2024-07-26 01:16:48.194567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:18.021 [2024-07-26 01:16:48.194580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:18.021 [2024-07-26 01:16:48.194609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:18.021 qpair failed and we were unable to recover it.
00:34:18.021 [2024-07-26 01:16:48.204463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:18.021 [2024-07-26 01:16:48.204575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:18.021 [2024-07-26 01:16:48.204601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:18.021 [2024-07-26 01:16:48.204615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:18.021 [2024-07-26 01:16:48.204629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:18.021 [2024-07-26 01:16:48.204657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:18.021 qpair failed and we were unable to recover it.
00:34:18.021 [2024-07-26 01:16:48.214562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:18.021 [2024-07-26 01:16:48.214669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:18.021 [2024-07-26 01:16:48.214695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:18.021 [2024-07-26 01:16:48.214709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:18.021 [2024-07-26 01:16:48.214722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:18.021 [2024-07-26 01:16:48.214751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:18.021 qpair failed and we were unable to recover it.
00:34:18.021 [2024-07-26 01:16:48.224540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:18.021 [2024-07-26 01:16:48.224649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:18.021 [2024-07-26 01:16:48.224675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:18.021 [2024-07-26 01:16:48.224695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:18.021 [2024-07-26 01:16:48.224709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:18.021 [2024-07-26 01:16:48.224740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:18.021 qpair failed and we were unable to recover it.
00:34:18.021 [2024-07-26 01:16:48.234528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:18.021 [2024-07-26 01:16:48.234655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:18.021 [2024-07-26 01:16:48.234681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:18.021 [2024-07-26 01:16:48.234696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:18.021 [2024-07-26 01:16:48.234708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:18.021 [2024-07-26 01:16:48.234737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:18.021 qpair failed and we were unable to recover it.
00:34:18.021 [2024-07-26 01:16:48.244552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:18.021 [2024-07-26 01:16:48.244667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:18.021 [2024-07-26 01:16:48.244693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:18.021 [2024-07-26 01:16:48.244708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:18.021 [2024-07-26 01:16:48.244721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:18.021 [2024-07-26 01:16:48.244750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:18.021 qpair failed and we were unable to recover it.
00:34:18.021 [2024-07-26 01:16:48.254617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:18.021 [2024-07-26 01:16:48.254729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:18.021 [2024-07-26 01:16:48.254754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:18.021 [2024-07-26 01:16:48.254769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:18.021 [2024-07-26 01:16:48.254782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:18.021 [2024-07-26 01:16:48.254811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:18.021 qpair failed and we were unable to recover it.
00:34:18.021 [2024-07-26 01:16:48.264595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:18.021 [2024-07-26 01:16:48.264745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:18.021 [2024-07-26 01:16:48.264772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:18.021 [2024-07-26 01:16:48.264786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:18.021 [2024-07-26 01:16:48.264799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:18.022 [2024-07-26 01:16:48.264829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:18.022 qpair failed and we were unable to recover it.
00:34:18.022 [2024-07-26 01:16:48.274661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:18.022 [2024-07-26 01:16:48.274768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:18.022 [2024-07-26 01:16:48.274794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:18.022 [2024-07-26 01:16:48.274809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:18.022 [2024-07-26 01:16:48.274822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:18.022 [2024-07-26 01:16:48.274851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:18.022 qpair failed and we were unable to recover it.
00:34:18.022 [2024-07-26 01:16:48.284704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:18.022 [2024-07-26 01:16:48.284857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:18.022 [2024-07-26 01:16:48.284882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:18.022 [2024-07-26 01:16:48.284897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:18.022 [2024-07-26 01:16:48.284910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:18.022 [2024-07-26 01:16:48.284941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:18.022 qpair failed and we were unable to recover it.
00:34:18.022 [2024-07-26 01:16:48.294708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:18.022 [2024-07-26 01:16:48.294823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:18.022 [2024-07-26 01:16:48.294849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:18.022 [2024-07-26 01:16:48.294863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:18.022 [2024-07-26 01:16:48.294876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:18.022 [2024-07-26 01:16:48.294905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:18.022 qpair failed and we were unable to recover it.
00:34:18.022 [2024-07-26 01:16:48.304720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:18.022 [2024-07-26 01:16:48.304868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:18.022 [2024-07-26 01:16:48.304894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:18.022 [2024-07-26 01:16:48.304908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:18.022 [2024-07-26 01:16:48.304921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:18.022 [2024-07-26 01:16:48.304950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:18.022 qpair failed and we were unable to recover it.
00:34:18.022 [2024-07-26 01:16:48.314795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:18.022 [2024-07-26 01:16:48.314910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:18.022 [2024-07-26 01:16:48.314936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:18.022 [2024-07-26 01:16:48.314956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:18.022 [2024-07-26 01:16:48.314971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:18.022 [2024-07-26 01:16:48.315000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:18.022 qpair failed and we were unable to recover it.
00:34:18.022 [2024-07-26 01:16:48.324797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:18.022 [2024-07-26 01:16:48.324910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:18.022 [2024-07-26 01:16:48.324936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:18.022 [2024-07-26 01:16:48.324951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:18.022 [2024-07-26 01:16:48.324964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90
00:34:18.022 [2024-07-26 01:16:48.324993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:18.022 qpair failed and we were unable to recover it.
00:34:18.022 [2024-07-26 01:16:48.334789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.022 [2024-07-26 01:16:48.334901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.022 [2024-07-26 01:16:48.334927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.022 [2024-07-26 01:16:48.334941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.022 [2024-07-26 01:16:48.334954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.022 [2024-07-26 01:16:48.334983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.022 qpair failed and we were unable to recover it. 
00:34:18.022 [2024-07-26 01:16:48.344814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.022 [2024-07-26 01:16:48.344916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.022 [2024-07-26 01:16:48.344942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.022 [2024-07-26 01:16:48.344957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.022 [2024-07-26 01:16:48.344970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.022 [2024-07-26 01:16:48.344999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.022 qpair failed and we were unable to recover it. 
00:34:18.022 [2024-07-26 01:16:48.354884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.022 [2024-07-26 01:16:48.354991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.022 [2024-07-26 01:16:48.355017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.022 [2024-07-26 01:16:48.355032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.022 [2024-07-26 01:16:48.355045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.022 [2024-07-26 01:16:48.355081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.022 qpair failed and we were unable to recover it. 
00:34:18.022 [2024-07-26 01:16:48.364929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.022 [2024-07-26 01:16:48.365046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.022 [2024-07-26 01:16:48.365079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.022 [2024-07-26 01:16:48.365094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.022 [2024-07-26 01:16:48.365107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.022 [2024-07-26 01:16:48.365152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.022 qpair failed and we were unable to recover it. 
00:34:18.022 [2024-07-26 01:16:48.374958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.022 [2024-07-26 01:16:48.375073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.022 [2024-07-26 01:16:48.375099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.022 [2024-07-26 01:16:48.375115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.022 [2024-07-26 01:16:48.375127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.022 [2024-07-26 01:16:48.375157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.022 qpair failed and we were unable to recover it. 
00:34:18.022 [2024-07-26 01:16:48.384928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.022 [2024-07-26 01:16:48.385034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.022 [2024-07-26 01:16:48.385066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.022 [2024-07-26 01:16:48.385084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.022 [2024-07-26 01:16:48.385098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.022 [2024-07-26 01:16:48.385127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.022 qpair failed and we were unable to recover it. 
00:34:18.022 [2024-07-26 01:16:48.394973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.022 [2024-07-26 01:16:48.395085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.022 [2024-07-26 01:16:48.395111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.023 [2024-07-26 01:16:48.395126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.023 [2024-07-26 01:16:48.395139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.023 [2024-07-26 01:16:48.395169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.023 qpair failed and we were unable to recover it. 
00:34:18.023 [2024-07-26 01:16:48.405010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.023 [2024-07-26 01:16:48.405135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.023 [2024-07-26 01:16:48.405168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.023 [2024-07-26 01:16:48.405187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.023 [2024-07-26 01:16:48.405202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.023 [2024-07-26 01:16:48.405234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.023 qpair failed and we were unable to recover it. 
00:34:18.023 [2024-07-26 01:16:48.415108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.023 [2024-07-26 01:16:48.415246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.023 [2024-07-26 01:16:48.415272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.023 [2024-07-26 01:16:48.415287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.023 [2024-07-26 01:16:48.415301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.023 [2024-07-26 01:16:48.415330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.023 qpair failed and we were unable to recover it. 
00:34:18.023 [2024-07-26 01:16:48.425040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.023 [2024-07-26 01:16:48.425158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.023 [2024-07-26 01:16:48.425185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.023 [2024-07-26 01:16:48.425200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.023 [2024-07-26 01:16:48.425213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.023 [2024-07-26 01:16:48.425243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.023 qpair failed and we were unable to recover it. 
00:34:18.023 [2024-07-26 01:16:48.435081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.023 [2024-07-26 01:16:48.435188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.023 [2024-07-26 01:16:48.435214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.023 [2024-07-26 01:16:48.435228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.023 [2024-07-26 01:16:48.435242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.023 [2024-07-26 01:16:48.435271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.023 qpair failed and we were unable to recover it. 
00:34:18.023 [2024-07-26 01:16:48.445132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.023 [2024-07-26 01:16:48.445244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.023 [2024-07-26 01:16:48.445269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.023 [2024-07-26 01:16:48.445283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.023 [2024-07-26 01:16:48.445297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.023 [2024-07-26 01:16:48.445332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.023 qpair failed and we were unable to recover it. 
00:34:18.282 [2024-07-26 01:16:48.455139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.282 [2024-07-26 01:16:48.455252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.282 [2024-07-26 01:16:48.455278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.282 [2024-07-26 01:16:48.455292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.282 [2024-07-26 01:16:48.455305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.282 [2024-07-26 01:16:48.455336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.282 qpair failed and we were unable to recover it. 
00:34:18.282 [2024-07-26 01:16:48.465216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.282 [2024-07-26 01:16:48.465326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.282 [2024-07-26 01:16:48.465352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.282 [2024-07-26 01:16:48.465366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.282 [2024-07-26 01:16:48.465380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.282 [2024-07-26 01:16:48.465408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.282 qpair failed and we were unable to recover it. 
00:34:18.282 [2024-07-26 01:16:48.475286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.282 [2024-07-26 01:16:48.475402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.282 [2024-07-26 01:16:48.475428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.282 [2024-07-26 01:16:48.475443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.282 [2024-07-26 01:16:48.475455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.282 [2024-07-26 01:16:48.475485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.282 qpair failed and we were unable to recover it. 
00:34:18.282 [2024-07-26 01:16:48.485324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.282 [2024-07-26 01:16:48.485432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.282 [2024-07-26 01:16:48.485458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.282 [2024-07-26 01:16:48.485472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.282 [2024-07-26 01:16:48.485485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.282 [2024-07-26 01:16:48.485514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.282 qpair failed and we were unable to recover it. 
00:34:18.282 [2024-07-26 01:16:48.495336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.282 [2024-07-26 01:16:48.495484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.282 [2024-07-26 01:16:48.495515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.282 [2024-07-26 01:16:48.495530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.282 [2024-07-26 01:16:48.495543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.282 [2024-07-26 01:16:48.495572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.282 qpair failed and we were unable to recover it. 
00:34:18.282 [2024-07-26 01:16:48.505327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.282 [2024-07-26 01:16:48.505461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.282 [2024-07-26 01:16:48.505486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.282 [2024-07-26 01:16:48.505501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.282 [2024-07-26 01:16:48.505514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.282 [2024-07-26 01:16:48.505542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.282 qpair failed and we were unable to recover it. 
00:34:18.282 [2024-07-26 01:16:48.515321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.282 [2024-07-26 01:16:48.515428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.282 [2024-07-26 01:16:48.515454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.282 [2024-07-26 01:16:48.515469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.282 [2024-07-26 01:16:48.515483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.282 [2024-07-26 01:16:48.515511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.282 qpair failed and we were unable to recover it. 
00:34:18.282 [2024-07-26 01:16:48.525346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.282 [2024-07-26 01:16:48.525457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.282 [2024-07-26 01:16:48.525482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.282 [2024-07-26 01:16:48.525496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.282 [2024-07-26 01:16:48.525509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.282 [2024-07-26 01:16:48.525538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.282 qpair failed and we were unable to recover it. 
00:34:18.282 [2024-07-26 01:16:48.535360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.282 [2024-07-26 01:16:48.535465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.282 [2024-07-26 01:16:48.535491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.282 [2024-07-26 01:16:48.535506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.282 [2024-07-26 01:16:48.535524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.282 [2024-07-26 01:16:48.535555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.282 qpair failed and we were unable to recover it. 
00:34:18.282 [2024-07-26 01:16:48.545487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.282 [2024-07-26 01:16:48.545596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.282 [2024-07-26 01:16:48.545622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.282 [2024-07-26 01:16:48.545637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.282 [2024-07-26 01:16:48.545649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.282 [2024-07-26 01:16:48.545679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.282 qpair failed and we were unable to recover it. 
00:34:18.282 [2024-07-26 01:16:48.555453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.282 [2024-07-26 01:16:48.555578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.282 [2024-07-26 01:16:48.555604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.282 [2024-07-26 01:16:48.555618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.282 [2024-07-26 01:16:48.555631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.282 [2024-07-26 01:16:48.555661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.282 qpair failed and we were unable to recover it. 
00:34:18.282 [2024-07-26 01:16:48.565477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.282 [2024-07-26 01:16:48.565635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.282 [2024-07-26 01:16:48.565663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.282 [2024-07-26 01:16:48.565678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.282 [2024-07-26 01:16:48.565692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.282 [2024-07-26 01:16:48.565722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.282 qpair failed and we were unable to recover it. 
00:34:18.282 [2024-07-26 01:16:48.575504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.282 [2024-07-26 01:16:48.575608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.282 [2024-07-26 01:16:48.575635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.283 [2024-07-26 01:16:48.575649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.283 [2024-07-26 01:16:48.575663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.283 [2024-07-26 01:16:48.575692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.283 qpair failed and we were unable to recover it. 
00:34:18.283 [2024-07-26 01:16:48.585531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.283 [2024-07-26 01:16:48.585640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.283 [2024-07-26 01:16:48.585666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.283 [2024-07-26 01:16:48.585681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.283 [2024-07-26 01:16:48.585694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.283 [2024-07-26 01:16:48.585723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.283 qpair failed and we were unable to recover it. 
00:34:18.283 [2024-07-26 01:16:48.595555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.283 [2024-07-26 01:16:48.595671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.283 [2024-07-26 01:16:48.595697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.283 [2024-07-26 01:16:48.595711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.283 [2024-07-26 01:16:48.595724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.283 [2024-07-26 01:16:48.595766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.283 qpair failed and we were unable to recover it. 
00:34:18.283 [2024-07-26 01:16:48.605682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.283 [2024-07-26 01:16:48.605809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.283 [2024-07-26 01:16:48.605835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.283 [2024-07-26 01:16:48.605849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.283 [2024-07-26 01:16:48.605862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.283 [2024-07-26 01:16:48.605891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.283 qpair failed and we were unable to recover it. 
00:34:18.283 [2024-07-26 01:16:48.615572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.283 [2024-07-26 01:16:48.615713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.283 [2024-07-26 01:16:48.615739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.283 [2024-07-26 01:16:48.615753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.283 [2024-07-26 01:16:48.615766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.283 [2024-07-26 01:16:48.615795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.283 qpair failed and we were unable to recover it. 
00:34:18.803 [2024-07-26 01:16:48.976598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.803 [2024-07-26 01:16:48.976729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.803 [2024-07-26 01:16:48.976755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.803 [2024-07-26 01:16:48.976770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.803 [2024-07-26 01:16:48.976784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.803 [2024-07-26 01:16:48.976814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.803 qpair failed and we were unable to recover it. 
00:34:18.803 [2024-07-26 01:16:48.986605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.803 [2024-07-26 01:16:48.986715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.803 [2024-07-26 01:16:48.986741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.803 [2024-07-26 01:16:48.986755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.803 [2024-07-26 01:16:48.986768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.803 [2024-07-26 01:16:48.986810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.803 qpair failed and we were unable to recover it. 
00:34:18.803 [2024-07-26 01:16:48.996746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.803 [2024-07-26 01:16:48.996882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.803 [2024-07-26 01:16:48.996908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.803 [2024-07-26 01:16:48.996922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.803 [2024-07-26 01:16:48.996935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.803 [2024-07-26 01:16:48.996964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.803 qpair failed and we were unable to recover it. 
00:34:18.803 [2024-07-26 01:16:49.006728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.803 [2024-07-26 01:16:49.006858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.803 [2024-07-26 01:16:49.006884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.803 [2024-07-26 01:16:49.006898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.803 [2024-07-26 01:16:49.006911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.803 [2024-07-26 01:16:49.006940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.803 qpair failed and we were unable to recover it. 
00:34:18.803 [2024-07-26 01:16:49.016812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.803 [2024-07-26 01:16:49.016932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.803 [2024-07-26 01:16:49.016957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.803 [2024-07-26 01:16:49.016972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.803 [2024-07-26 01:16:49.016984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.803 [2024-07-26 01:16:49.017014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.803 qpair failed and we were unable to recover it. 
00:34:18.803 [2024-07-26 01:16:49.026769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.803 [2024-07-26 01:16:49.026882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.803 [2024-07-26 01:16:49.026908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.803 [2024-07-26 01:16:49.026922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.803 [2024-07-26 01:16:49.026940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.803 [2024-07-26 01:16:49.026970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.803 qpair failed and we were unable to recover it. 
00:34:18.803 [2024-07-26 01:16:49.036749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.803 [2024-07-26 01:16:49.036858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.803 [2024-07-26 01:16:49.036884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.803 [2024-07-26 01:16:49.036899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.803 [2024-07-26 01:16:49.036911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.803 [2024-07-26 01:16:49.036940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.803 qpair failed and we were unable to recover it. 
00:34:18.803 [2024-07-26 01:16:49.046916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.803 [2024-07-26 01:16:49.047044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.803 [2024-07-26 01:16:49.047077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.803 [2024-07-26 01:16:49.047092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.803 [2024-07-26 01:16:49.047106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.804 [2024-07-26 01:16:49.047135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.804 qpair failed and we were unable to recover it. 
00:34:18.804 [2024-07-26 01:16:49.056813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.804 [2024-07-26 01:16:49.056921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.804 [2024-07-26 01:16:49.056947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.804 [2024-07-26 01:16:49.056962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.804 [2024-07-26 01:16:49.056974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.804 [2024-07-26 01:16:49.057003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.804 qpair failed and we were unable to recover it. 
00:34:18.804 [2024-07-26 01:16:49.066899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.804 [2024-07-26 01:16:49.067006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.804 [2024-07-26 01:16:49.067032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.804 [2024-07-26 01:16:49.067046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.804 [2024-07-26 01:16:49.067065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.804 [2024-07-26 01:16:49.067097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.804 qpair failed and we were unable to recover it. 
00:34:18.804 [2024-07-26 01:16:49.076862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.804 [2024-07-26 01:16:49.076966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.804 [2024-07-26 01:16:49.076992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.804 [2024-07-26 01:16:49.077007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.804 [2024-07-26 01:16:49.077021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.804 [2024-07-26 01:16:49.077072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.804 qpair failed and we were unable to recover it. 
00:34:18.804 [2024-07-26 01:16:49.086893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.804 [2024-07-26 01:16:49.087002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.804 [2024-07-26 01:16:49.087028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.804 [2024-07-26 01:16:49.087042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.804 [2024-07-26 01:16:49.087055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.804 [2024-07-26 01:16:49.087092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.804 qpair failed and we were unable to recover it. 
00:34:18.804 [2024-07-26 01:16:49.096916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.804 [2024-07-26 01:16:49.097038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.804 [2024-07-26 01:16:49.097081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.804 [2024-07-26 01:16:49.097097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.804 [2024-07-26 01:16:49.097111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.804 [2024-07-26 01:16:49.097141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.804 qpair failed and we were unable to recover it. 
00:34:18.804 [2024-07-26 01:16:49.107047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.804 [2024-07-26 01:16:49.107163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.804 [2024-07-26 01:16:49.107189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.804 [2024-07-26 01:16:49.107203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.804 [2024-07-26 01:16:49.107216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.804 [2024-07-26 01:16:49.107245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.804 qpair failed and we were unable to recover it. 
00:34:18.804 [2024-07-26 01:16:49.117035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.804 [2024-07-26 01:16:49.117186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.804 [2024-07-26 01:16:49.117212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.804 [2024-07-26 01:16:49.117233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.804 [2024-07-26 01:16:49.117247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.804 [2024-07-26 01:16:49.117290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.804 qpair failed and we were unable to recover it. 
00:34:18.804 [2024-07-26 01:16:49.127032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.804 [2024-07-26 01:16:49.127154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.804 [2024-07-26 01:16:49.127181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.804 [2024-07-26 01:16:49.127195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.804 [2024-07-26 01:16:49.127209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.804 [2024-07-26 01:16:49.127238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.804 qpair failed and we were unable to recover it. 
00:34:18.804 [2024-07-26 01:16:49.137041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.804 [2024-07-26 01:16:49.137154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.804 [2024-07-26 01:16:49.137181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.804 [2024-07-26 01:16:49.137196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.804 [2024-07-26 01:16:49.137209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.804 [2024-07-26 01:16:49.137253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.804 qpair failed and we were unable to recover it. 
00:34:18.804 [2024-07-26 01:16:49.147182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.804 [2024-07-26 01:16:49.147319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.804 [2024-07-26 01:16:49.147345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.804 [2024-07-26 01:16:49.147359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.804 [2024-07-26 01:16:49.147372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.804 [2024-07-26 01:16:49.147402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.804 qpair failed and we were unable to recover it. 
00:34:18.804 [2024-07-26 01:16:49.157189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.804 [2024-07-26 01:16:49.157307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.804 [2024-07-26 01:16:49.157333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.804 [2024-07-26 01:16:49.157348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.804 [2024-07-26 01:16:49.157361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.804 [2024-07-26 01:16:49.157390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.804 qpair failed and we were unable to recover it. 
00:34:18.804 [2024-07-26 01:16:49.167212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.804 [2024-07-26 01:16:49.167321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.804 [2024-07-26 01:16:49.167347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.804 [2024-07-26 01:16:49.167361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.804 [2024-07-26 01:16:49.167374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.804 [2024-07-26 01:16:49.167403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.804 qpair failed and we were unable to recover it. 
00:34:18.804 [2024-07-26 01:16:49.177231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.804 [2024-07-26 01:16:49.177344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.804 [2024-07-26 01:16:49.177370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.804 [2024-07-26 01:16:49.177384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.804 [2024-07-26 01:16:49.177397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.804 [2024-07-26 01:16:49.177426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.804 qpair failed and we were unable to recover it. 
00:34:18.805 [2024-07-26 01:16:49.187215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.805 [2024-07-26 01:16:49.187324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.805 [2024-07-26 01:16:49.187350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.805 [2024-07-26 01:16:49.187365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.805 [2024-07-26 01:16:49.187378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.805 [2024-07-26 01:16:49.187409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.805 qpair failed and we were unable to recover it. 
00:34:18.805 [2024-07-26 01:16:49.197231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.805 [2024-07-26 01:16:49.197339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.805 [2024-07-26 01:16:49.197370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.805 [2024-07-26 01:16:49.197385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.805 [2024-07-26 01:16:49.197398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.805 [2024-07-26 01:16:49.197427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.805 qpair failed and we were unable to recover it. 
00:34:18.805 [2024-07-26 01:16:49.207337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.805 [2024-07-26 01:16:49.207458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.805 [2024-07-26 01:16:49.207489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.805 [2024-07-26 01:16:49.207505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.805 [2024-07-26 01:16:49.207518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.805 [2024-07-26 01:16:49.207547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.805 qpair failed and we were unable to recover it. 
00:34:18.805 [2024-07-26 01:16:49.217276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.805 [2024-07-26 01:16:49.217388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.805 [2024-07-26 01:16:49.217413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.805 [2024-07-26 01:16:49.217428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.805 [2024-07-26 01:16:49.217441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.805 [2024-07-26 01:16:49.217471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.805 qpair failed and we were unable to recover it. 
00:34:18.805 [2024-07-26 01:16:49.227306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.805 [2024-07-26 01:16:49.227452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.805 [2024-07-26 01:16:49.227479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.805 [2024-07-26 01:16:49.227493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.805 [2024-07-26 01:16:49.227507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:18.805 [2024-07-26 01:16:49.227536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.805 qpair failed and we were unable to recover it. 
00:34:19.064 [2024-07-26 01:16:49.237334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.064 [2024-07-26 01:16:49.237437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.064 [2024-07-26 01:16:49.237464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.064 [2024-07-26 01:16:49.237479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.064 [2024-07-26 01:16:49.237492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:19.064 [2024-07-26 01:16:49.237533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:19.064 qpair failed and we were unable to recover it. 
00:34:19.064 [2024-07-26 01:16:49.247382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.064 [2024-07-26 01:16:49.247496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.064 [2024-07-26 01:16:49.247522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.064 [2024-07-26 01:16:49.247537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.064 [2024-07-26 01:16:49.247551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:19.064 [2024-07-26 01:16:49.247586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:19.064 qpair failed and we were unable to recover it. 
00:34:19.064 [2024-07-26 01:16:49.257485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.064 [2024-07-26 01:16:49.257611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.064 [2024-07-26 01:16:49.257637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.064 [2024-07-26 01:16:49.257652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.064 [2024-07-26 01:16:49.257665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:19.064 [2024-07-26 01:16:49.257694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:19.064 qpair failed and we were unable to recover it. 
00:34:19.065 [2024-07-26 01:16:49.467964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:19.065 [2024-07-26 01:16:49.468091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:19.065 [2024-07-26 01:16:49.468118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:19.065 [2024-07-26 01:16:49.468133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:19.065 [2024-07-26 01:16:49.468146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fba68000b90 00:34:19.065 [2024-07-26 01:16:49.468176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:19.065 qpair failed and we were unable to recover it. 00:34:19.065 [2024-07-26 01:16:49.468211] nvme_ctrlr.c:4480:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:34:19.065 A controller has encountered a failure and is being reset. 00:34:19.065 Controller properly reset. 00:34:20.963 Initializing NVMe Controllers 00:34:20.963 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:20.963 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:20.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:20.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:20.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:20.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:20.964 Initialization complete. Launching workers. 
00:34:20.964 Starting thread on core 1 00:34:20.964 Starting thread on core 2 00:34:20.964 Starting thread on core 3 00:34:20.964 Starting thread on core 0 00:34:20.964 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:20.964 00:34:20.964 real 0m10.722s 00:34:20.964 user 0m22.503s 00:34:20.964 sys 0m5.848s 00:34:20.964 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:20.964 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:20.964 ************************************ 00:34:20.964 END TEST nvmf_target_disconnect_tc2 00:34:20.964 ************************************ 00:34:20.964 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:20.964 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:20.964 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:20.964 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:20.964 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:34:20.964 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:20.964 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:34:20.964 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:20.964 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:20.964 rmmod nvme_tcp 00:34:20.964 rmmod nvme_fabrics 00:34:20.964 rmmod nvme_keyring 00:34:20.964 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:34:20.964 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:34:20.964 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:34:20.964 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1984507 ']' 00:34:20.964 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1984507 00:34:20.964 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1984507 ']' 00:34:20.964 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1984507 00:34:20.964 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:34:20.964 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:20.964 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1984507 00:34:20.964 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:34:20.964 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:34:20.964 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1984507' 00:34:20.964 killing process with pid 1984507 00:34:20.964 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 1984507 00:34:20.964 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1984507 00:34:21.222 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:21.222 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:21.222 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:21.222 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:21.222 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:21.222 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:21.222 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:21.222 01:16:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:23.130 01:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:23.130 00:34:23.130 real 0m15.330s 00:34:23.130 user 0m48.124s 00:34:23.130 sys 0m7.890s 00:34:23.130 01:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:23.130 01:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:23.130 ************************************ 00:34:23.130 END TEST nvmf_target_disconnect 00:34:23.130 ************************************ 00:34:23.130 01:16:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:23.130 00:34:23.130 real 6m30.207s 00:34:23.130 user 16m49.079s 00:34:23.130 sys 1m24.728s 00:34:23.130 01:16:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:23.130 01:16:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.130 ************************************ 00:34:23.130 END TEST nvmf_host 00:34:23.130 ************************************ 00:34:23.388 00:34:23.388 real 27m8.054s 00:34:23.388 user 74m2.005s 00:34:23.388 sys 6m25.004s 00:34:23.388 01:16:53 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:23.388 01:16:53 nvmf_tcp -- common/autotest_common.sh@10 
-- # set +x 00:34:23.388 ************************************ 00:34:23.388 END TEST nvmf_tcp 00:34:23.388 ************************************ 00:34:23.388 01:16:53 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:34:23.388 01:16:53 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:23.388 01:16:53 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:23.388 01:16:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:23.388 01:16:53 -- common/autotest_common.sh@10 -- # set +x 00:34:23.388 ************************************ 00:34:23.388 START TEST spdkcli_nvmf_tcp 00:34:23.388 ************************************ 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:23.388 * Looking for test storage... 00:34:23.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:23.388 
01:16:53 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1985609 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1985609 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 1985609 ']' 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:34:23.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:23.388 01:16:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:23.388 [2024-07-26 01:16:53.736674] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:34:23.388 [2024-07-26 01:16:53.736748] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1985609 ] 00:34:23.388 EAL: No free 2048 kB hugepages reported on node 1 00:34:23.388 [2024-07-26 01:16:53.792601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:23.645 [2024-07-26 01:16:53.880181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:23.645 [2024-07-26 01:16:53.880185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:23.645 01:16:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:23.645 01:16:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:34:23.645 01:16:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:23.645 01:16:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:23.645 01:16:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:23.645 01:16:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:23.645 01:16:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:23.645 01:16:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:23.645 01:16:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:23.645 01:16:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 
-- # set +x 00:34:23.645 01:16:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:23.645 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:23.645 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:23.645 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:23.645 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:23.645 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:23.645 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:23.645 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:23.645 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:23.645 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:23.645 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:23.645 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:23.645 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:23.645 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:23.645 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:23.645 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:23.645 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:23.645 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:23.645 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:23.645 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:23.645 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:23.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:23.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:23.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:23.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:23.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:23.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:23.646 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:23.646 ' 00:34:26.175 [2024-07-26 01:16:56.594976] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:27.546 [2024-07-26 01:16:57.823266] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:30.073 [2024-07-26 01:17:00.094468] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:31.971 [2024-07-26 01:17:02.036521] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 127.0.0.1 port 4262 *** 00:34:33.368 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:33.368 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:33.368 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:33.368 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:33.368 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:33.368 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:33.368 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:33.368 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:33.368 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:33.368 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:33.368 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:33.368 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:33.368 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:33.368 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:33.368 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:33.368 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 
'Malloc1', True] 00:34:33.368 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:33.368 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:33.368 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:33.368 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:33.368 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:33.369 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:33.369 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:33.369 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:33.369 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:33.369 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:33.369 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:33.369 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:33.369 01:17:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:33.369 01:17:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:33.369 01:17:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:33.369 01:17:03 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:33.369 01:17:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:33.369 01:17:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:33.369 01:17:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:33.369 01:17:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:33.626 01:17:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:33.884 01:17:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:33.884 01:17:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:33.884 01:17:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:33.884 01:17:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:33.884 01:17:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:33.884 01:17:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:33.884 01:17:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:33.884 01:17:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:33.884 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:33.884 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:33.884 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' 
'\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:33.884 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:33.884 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:33.884 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:33.884 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:33.884 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:33.884 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:33.884 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:33.884 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:33.884 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:33.884 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:33.884 ' 00:34:39.144 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:39.144 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:39.144 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:39.144 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:39.144 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:39.144 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:39.144 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:39.144 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:39.144 Executing command: ['/bdevs/malloc delete Malloc6', 
'Malloc6', False] 00:34:39.144 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:39.144 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:39.144 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:39.144 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:39.144 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:39.144 01:17:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:39.144 01:17:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:39.144 01:17:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:39.144 01:17:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1985609 00:34:39.144 01:17:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1985609 ']' 00:34:39.144 01:17:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1985609 00:34:39.144 01:17:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:34:39.144 01:17:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:39.144 01:17:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1985609 00:34:39.144 01:17:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:39.144 01:17:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:39.144 01:17:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1985609' 00:34:39.144 killing process with pid 1985609 00:34:39.144 01:17:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 1985609 00:34:39.144 01:17:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 1985609 00:34:39.403 01:17:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:39.403 01:17:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:39.403 
01:17:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1985609 ']' 00:34:39.404 01:17:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1985609 00:34:39.404 01:17:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1985609 ']' 00:34:39.404 01:17:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1985609 00:34:39.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1985609) - No such process 00:34:39.404 01:17:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 1985609 is not found' 00:34:39.404 Process with pid 1985609 is not found 00:34:39.404 01:17:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:39.404 01:17:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:39.404 01:17:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:39.404 00:34:39.404 real 0m16.011s 00:34:39.404 user 0m33.927s 00:34:39.404 sys 0m0.775s 00:34:39.404 01:17:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:39.404 01:17:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:39.404 ************************************ 00:34:39.404 END TEST spdkcli_nvmf_tcp 00:34:39.404 ************************************ 00:34:39.404 01:17:09 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:39.404 01:17:09 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:39.404 01:17:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:39.404 01:17:09 -- common/autotest_common.sh@10 -- # set +x 00:34:39.404 ************************************ 00:34:39.404 START TEST 
nvmf_identify_passthru 00:34:39.404 ************************************ 00:34:39.404 01:17:09 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:39.404 * Looking for test storage... 00:34:39.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:39.404 01:17:09 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:39.404 01:17:09 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:39.404 01:17:09 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:39.404 01:17:09 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:39.404 01:17:09 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.404 01:17:09 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.404 01:17:09 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.404 01:17:09 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:39.404 01:17:09 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:39.404 01:17:09 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:39.404 01:17:09 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:39.404 01:17:09 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:39.404 01:17:09 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:39.404 01:17:09 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.404 01:17:09 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.404 01:17:09 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.404 01:17:09 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
00:34:39.404 01:17:09 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.404 01:17:09 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:39.404 01:17:09 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:39.404 01:17:09 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:39.404 01:17:09 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:34:39.404 01:17:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=() 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:41.306 01:17:11 nvmf_identify_passthru -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:41.306 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:41.306 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:41.306 01:17:11 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:41.306 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:41.306 01:17:11 nvmf_identify_passthru -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:41.306 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:41.306 01:17:11 
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:41.306 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:41.564 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:41.564 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:41.564 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:41.564 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:41.564 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:41.564 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:41.564 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:41.564 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:41.564 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:41.564 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:41.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:41.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:34:41.564 00:34:41.564 --- 10.0.0.2 ping statistics --- 00:34:41.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:41.565 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:34:41.565 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:41.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:41.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:34:41.565 00:34:41.565 --- 10.0.0.1 ping statistics --- 00:34:41.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:41.565 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:34:41.565 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:41.565 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:34:41.565 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:41.565 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:41.565 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:41.565 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:41.565 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:41.565 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:41.565 01:17:11 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:41.565 01:17:11 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:41.565 01:17:11 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:41.565 01:17:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:41.565 01:17:11 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:41.565 01:17:11 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:34:41.565 01:17:11 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:34:41.565 01:17:11 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:34:41.565 01:17:11 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:34:41.565 01:17:11 nvmf_identify_passthru -- 
common/autotest_common.sh@1513 -- # bdfs=() 00:34:41.565 01:17:11 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:34:41.565 01:17:11 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:41.565 01:17:11 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:41.565 01:17:11 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:34:41.565 01:17:11 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:34:41.565 01:17:11 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:34:41.565 01:17:11 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:34:41.565 01:17:11 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:34:41.565 01:17:11 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:34:41.565 01:17:11 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:41.565 01:17:11 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:41.565 01:17:11 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:41.824 EAL: No free 2048 kB hugepages reported on node 1 00:34:46.018 01:17:16 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:34:46.018 01:17:16 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:46.018 01:17:16 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:34:46.018 01:17:16 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:46.018 EAL: No free 2048 kB hugepages reported on node 1 00:34:50.208 01:17:20 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:50.208 01:17:20 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:50.208 01:17:20 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:50.208 01:17:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:50.208 01:17:20 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:50.208 01:17:20 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:50.208 01:17:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:50.208 01:17:20 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1990219 00:34:50.208 01:17:20 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:50.208 01:17:20 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:50.208 01:17:20 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1990219 00:34:50.208 01:17:20 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 1990219 ']' 00:34:50.208 01:17:20 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:50.208 01:17:20 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:50.208 01:17:20 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:50.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:50.208 01:17:20 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:50.208 01:17:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:50.208 [2024-07-26 01:17:20.472255] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:34:50.208 [2024-07-26 01:17:20.472355] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:50.208 EAL: No free 2048 kB hugepages reported on node 1 00:34:50.208 [2024-07-26 01:17:20.540677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:50.208 [2024-07-26 01:17:20.630380] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:50.208 [2024-07-26 01:17:20.630442] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:50.208 [2024-07-26 01:17:20.630470] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:50.208 [2024-07-26 01:17:20.630483] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:50.208 [2024-07-26 01:17:20.630493] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:50.208 [2024-07-26 01:17:20.630561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:50.208 [2024-07-26 01:17:20.630588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:50.208 [2024-07-26 01:17:20.630638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:50.208 [2024-07-26 01:17:20.630641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:50.465 01:17:20 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:50.465 01:17:20 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:34:50.465 01:17:20 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:50.465 01:17:20 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.465 01:17:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:50.465 INFO: Log level set to 20 00:34:50.465 INFO: Requests: 00:34:50.465 { 00:34:50.465 "jsonrpc": "2.0", 00:34:50.465 "method": "nvmf_set_config", 00:34:50.465 "id": 1, 00:34:50.465 "params": { 00:34:50.465 "admin_cmd_passthru": { 00:34:50.465 "identify_ctrlr": true 00:34:50.465 } 00:34:50.465 } 00:34:50.465 } 00:34:50.465 00:34:50.465 INFO: response: 00:34:50.465 { 00:34:50.465 "jsonrpc": "2.0", 00:34:50.465 "id": 1, 00:34:50.465 "result": true 00:34:50.465 } 00:34:50.465 00:34:50.465 01:17:20 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.465 01:17:20 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:50.465 01:17:20 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.465 01:17:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:50.465 INFO: Setting log level to 20 00:34:50.465 INFO: Setting log level to 20 00:34:50.465 INFO: Log level set to 20 00:34:50.465 INFO: Log level set to 20 00:34:50.465 
INFO: Requests: 00:34:50.465 { 00:34:50.465 "jsonrpc": "2.0", 00:34:50.465 "method": "framework_start_init", 00:34:50.465 "id": 1 00:34:50.465 } 00:34:50.465 00:34:50.465 INFO: Requests: 00:34:50.465 { 00:34:50.465 "jsonrpc": "2.0", 00:34:50.465 "method": "framework_start_init", 00:34:50.465 "id": 1 00:34:50.465 } 00:34:50.465 00:34:50.465 [2024-07-26 01:17:20.801449] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:50.465 INFO: response: 00:34:50.465 { 00:34:50.465 "jsonrpc": "2.0", 00:34:50.465 "id": 1, 00:34:50.465 "result": true 00:34:50.465 } 00:34:50.465 00:34:50.465 INFO: response: 00:34:50.465 { 00:34:50.465 "jsonrpc": "2.0", 00:34:50.465 "id": 1, 00:34:50.465 "result": true 00:34:50.465 } 00:34:50.465 00:34:50.465 01:17:20 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.465 01:17:20 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:50.465 01:17:20 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.465 01:17:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:50.465 INFO: Setting log level to 40 00:34:50.465 INFO: Setting log level to 40 00:34:50.465 INFO: Setting log level to 40 00:34:50.465 [2024-07-26 01:17:20.811589] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:50.465 01:17:20 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.465 01:17:20 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:50.465 01:17:20 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:50.465 01:17:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:50.465 01:17:20 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:34:50.465 01:17:20 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.465 01:17:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:53.751 Nvme0n1 00:34:53.751 01:17:23 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.751 01:17:23 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:53.751 01:17:23 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.751 01:17:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:53.751 01:17:23 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.751 01:17:23 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:53.751 01:17:23 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.751 01:17:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:53.751 01:17:23 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.751 01:17:23 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:53.751 01:17:23 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.751 01:17:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:53.751 [2024-07-26 01:17:23.705295] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:53.751 01:17:23 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.751 01:17:23 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:53.751 01:17:23 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.751 01:17:23 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:53.751 [ 00:34:53.751 { 00:34:53.751 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:53.751 "subtype": "Discovery", 00:34:53.751 "listen_addresses": [], 00:34:53.751 "allow_any_host": true, 00:34:53.751 "hosts": [] 00:34:53.751 }, 00:34:53.751 { 00:34:53.751 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:53.751 "subtype": "NVMe", 00:34:53.751 "listen_addresses": [ 00:34:53.751 { 00:34:53.751 "trtype": "TCP", 00:34:53.751 "adrfam": "IPv4", 00:34:53.751 "traddr": "10.0.0.2", 00:34:53.751 "trsvcid": "4420" 00:34:53.751 } 00:34:53.751 ], 00:34:53.751 "allow_any_host": true, 00:34:53.751 "hosts": [], 00:34:53.751 "serial_number": "SPDK00000000000001", 00:34:53.751 "model_number": "SPDK bdev Controller", 00:34:53.751 "max_namespaces": 1, 00:34:53.751 "min_cntlid": 1, 00:34:53.751 "max_cntlid": 65519, 00:34:53.751 "namespaces": [ 00:34:53.751 { 00:34:53.751 "nsid": 1, 00:34:53.751 "bdev_name": "Nvme0n1", 00:34:53.751 "name": "Nvme0n1", 00:34:53.751 "nguid": "0C2AA6B50083496FBF0DF4064D1236B9", 00:34:53.751 "uuid": "0c2aa6b5-0083-496f-bf0d-f4064d1236b9" 00:34:53.751 } 00:34:53.751 ] 00:34:53.751 } 00:34:53.751 ] 00:34:53.751 01:17:23 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.751 01:17:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:53.751 01:17:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:53.751 01:17:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:53.751 EAL: No free 2048 kB hugepages reported on node 1 00:34:53.751 01:17:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:34:53.751 01:17:23 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:53.751 01:17:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:53.751 01:17:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:53.751 EAL: No free 2048 kB hugepages reported on node 1 00:34:53.751 01:17:24 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:53.751 01:17:24 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:34:53.751 01:17:24 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:53.751 01:17:24 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:53.751 01:17:24 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.751 01:17:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:53.751 01:17:24 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.751 01:17:24 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:53.751 01:17:24 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:53.751 01:17:24 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:53.751 01:17:24 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:34:53.751 01:17:24 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:53.751 01:17:24 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:34:53.751 01:17:24 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:53.751 01:17:24 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:53.751 rmmod 
nvme_tcp 00:34:53.751 rmmod nvme_fabrics 00:34:53.751 rmmod nvme_keyring 00:34:53.751 01:17:24 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:53.751 01:17:24 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:34:53.751 01:17:24 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:34:53.751 01:17:24 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1990219 ']' 00:34:53.751 01:17:24 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1990219 00:34:53.752 01:17:24 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 1990219 ']' 00:34:53.752 01:17:24 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 1990219 00:34:53.752 01:17:24 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:34:53.752 01:17:24 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:53.752 01:17:24 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1990219 00:34:53.752 01:17:24 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:53.752 01:17:24 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:53.752 01:17:24 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1990219' 00:34:53.752 killing process with pid 1990219 00:34:53.752 01:17:24 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 1990219 00:34:53.752 01:17:24 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 1990219 00:34:55.654 01:17:25 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:55.654 01:17:25 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:55.654 01:17:25 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:55.654 01:17:25 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:34:55.654 01:17:25 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:55.654 01:17:25 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:55.654 01:17:25 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:55.654 01:17:25 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:57.555 01:17:27 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:57.555 00:34:57.555 real 0m18.038s 00:34:57.555 user 0m26.604s 00:34:57.555 sys 0m2.326s 00:34:57.555 01:17:27 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:57.555 01:17:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:57.555 ************************************ 00:34:57.555 END TEST nvmf_identify_passthru 00:34:57.555 ************************************ 00:34:57.555 01:17:27 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:57.555 01:17:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:57.555 01:17:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:57.555 01:17:27 -- common/autotest_common.sh@10 -- # set +x 00:34:57.555 ************************************ 00:34:57.555 START TEST nvmf_dif 00:34:57.555 ************************************ 00:34:57.555 01:17:27 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:57.555 * Looking for test storage... 
00:34:57.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:57.555 01:17:27 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:57.555 01:17:27 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:57.555 01:17:27 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:57.555 01:17:27 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:57.555 01:17:27 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:57.555 01:17:27 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:57.555 01:17:27 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:57.555 01:17:27 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:34:57.555 01:17:27 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:57.555 01:17:27 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:57.555 01:17:27 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:57.555 01:17:27 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:57.555 01:17:27 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:57.555 01:17:27 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:57.555 01:17:27 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:57.555 01:17:27 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:57.555 01:17:27 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:34:57.555 01:17:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:59.488 01:17:29 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:59.488 01:17:29 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:34:59.488 01:17:29 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:59.488 01:17:29 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:59.488 01:17:29 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:59.488 01:17:29 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:59.488 01:17:29 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:59.488 01:17:29 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:34:59.488 01:17:29 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:59.489 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 
(0x8086 - 0x159b)' 00:34:59.489 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:59.489 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:59.489 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:59.489 01:17:29 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:59.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:59.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:34:59.489 00:34:59.489 --- 10.0.0.2 ping statistics --- 00:34:59.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:59.489 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:59.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:59.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:34:59.489 00:34:59.489 --- 10.0.0.1 ping statistics --- 00:34:59.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:59.489 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:34:59.489 01:17:29 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:00.423 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:00.423 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:00.423 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:00.423 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:00.423 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:00.423 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:00.423 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:00.423 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:00.423 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:00.423 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:00.423 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:00.423 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:00.423 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:00.423 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:00.423 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:00.423 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:00.423 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:00.683 01:17:30 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:00.683 01:17:30 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:00.683 01:17:30 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:00.683 01:17:30 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:00.683 01:17:30 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:00.683 01:17:30 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:00.683 01:17:31 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:00.683 01:17:31 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:00.683 01:17:31 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:00.683 01:17:31 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:00.683 01:17:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:00.683 01:17:31 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1993376 00:35:00.683 01:17:31 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:00.683 01:17:31 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1993376 00:35:00.683 01:17:31 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 1993376 ']' 00:35:00.683 01:17:31 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:00.683 01:17:31 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:00.683 01:17:31 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:00.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:00.683 01:17:31 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:00.683 01:17:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:00.683 [2024-07-26 01:17:31.057530] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:35:00.683 [2024-07-26 01:17:31.057622] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:00.683 EAL: No free 2048 kB hugepages reported on node 1 00:35:00.942 [2024-07-26 01:17:31.122016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:00.942 [2024-07-26 01:17:31.210557] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:00.942 [2024-07-26 01:17:31.210626] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:00.942 [2024-07-26 01:17:31.210640] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:00.942 [2024-07-26 01:17:31.210651] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:00.942 [2024-07-26 01:17:31.210661] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:00.942 [2024-07-26 01:17:31.210701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:00.942 01:17:31 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:00.942 01:17:31 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:35:00.942 01:17:31 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:00.942 01:17:31 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:00.942 01:17:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:00.942 01:17:31 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:00.942 01:17:31 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:00.942 01:17:31 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:00.942 01:17:31 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.942 01:17:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:00.942 [2024-07-26 01:17:31.344329] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:00.942 01:17:31 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.942 01:17:31 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:00.942 01:17:31 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:00.942 01:17:31 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:00.942 01:17:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:01.201 ************************************ 00:35:01.201 START TEST fio_dif_1_default 00:35:01.201 ************************************ 00:35:01.201 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:35:01.201 01:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:01.201 01:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:01.201 01:17:31 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:35:01.201 01:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:01.201 01:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:01.201 01:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:01.201 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.201 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:01.201 bdev_null0 00:35:01.201 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.201 01:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:01.201 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.201 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:01.201 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.201 01:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:01.201 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.201 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:01.202 [2024-07-26 01:17:31.400618] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:01.202 { 00:35:01.202 "params": { 00:35:01.202 "name": "Nvme$subsystem", 00:35:01.202 "trtype": "$TEST_TRANSPORT", 00:35:01.202 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:01.202 "adrfam": "ipv4", 00:35:01.202 "trsvcid": "$NVMF_PORT", 00:35:01.202 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:01.202 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:01.202 "hdgst": ${hdgst:-false}, 00:35:01.202 "ddgst": ${ddgst:-false} 00:35:01.202 }, 00:35:01.202 "method": "bdev_nvme_attach_controller" 00:35:01.202 } 00:35:01.202 EOF 00:35:01.202 )") 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:01.202 "params": { 00:35:01.202 "name": "Nvme0", 00:35:01.202 "trtype": "tcp", 00:35:01.202 "traddr": "10.0.0.2", 00:35:01.202 "adrfam": "ipv4", 00:35:01.202 "trsvcid": "4420", 00:35:01.202 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:01.202 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:01.202 "hdgst": false, 00:35:01.202 "ddgst": false 00:35:01.202 }, 00:35:01.202 "method": "bdev_nvme_attach_controller" 00:35:01.202 }' 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:01.202 01:17:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:01.462 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:01.462 fio-3.35 
00:35:01.462 Starting 1 thread 00:35:01.462 EAL: No free 2048 kB hugepages reported on node 1 00:35:13.679 00:35:13.679 filename0: (groupid=0, jobs=1): err= 0: pid=1993601: Fri Jul 26 01:17:42 2024 00:35:13.679 read: IOPS=188, BW=756KiB/s (774kB/s)(7584KiB/10033msec) 00:35:13.679 slat (nsec): min=4393, max=35400, avg=9426.03, stdev=2821.74 00:35:13.679 clat (usec): min=606, max=48803, avg=21136.93, stdev=20447.00 00:35:13.679 lat (usec): min=614, max=48816, avg=21146.36, stdev=20446.99 00:35:13.679 clat percentiles (usec): 00:35:13.679 | 1.00th=[ 627], 5.00th=[ 652], 10.00th=[ 676], 20.00th=[ 693], 00:35:13.679 | 30.00th=[ 709], 40.00th=[ 725], 50.00th=[ 922], 60.00th=[41157], 00:35:13.679 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:35:13.679 | 99.00th=[42206], 99.50th=[42206], 99.90th=[49021], 99.95th=[49021], 00:35:13.679 | 99.99th=[49021] 00:35:13.679 bw ( KiB/s): min= 672, max= 832, per=100.00%, avg=756.80, stdev=33.28, samples=20 00:35:13.679 iops : min= 168, max= 208, avg=189.20, stdev= 8.32, samples=20 00:35:13.679 lat (usec) : 750=46.31%, 1000=3.69% 00:35:13.679 lat (msec) : 50=50.00% 00:35:13.679 cpu : usr=89.56%, sys=9.77%, ctx=37, majf=0, minf=231 00:35:13.679 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:13.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.679 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.679 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:13.679 00:35:13.679 Run status group 0 (all jobs): 00:35:13.679 READ: bw=756KiB/s (774kB/s), 756KiB/s-756KiB/s (774kB/s-774kB/s), io=7584KiB (7766kB), run=10033-10033msec 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:13.679 01:17:42 
nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.679 00:35:13.679 real 0m11.047s 00:35:13.679 user 0m10.078s 00:35:13.679 sys 0m1.239s 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:13.679 ************************************ 00:35:13.679 END TEST fio_dif_1_default 00:35:13.679 ************************************ 00:35:13.679 01:17:42 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:13.679 01:17:42 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:13.679 01:17:42 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:13.679 01:17:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:13.679 ************************************ 00:35:13.679 START TEST fio_dif_1_multi_subsystems 00:35:13.679 
************************************ 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:13.679 bdev_null0 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:13.679 [2024-07-26 01:17:42.503919] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:13.679 bdev_null1 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.679 
01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:13.679 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:13.679 { 00:35:13.679 "params": { 00:35:13.679 "name": "Nvme$subsystem", 00:35:13.679 
"trtype": "$TEST_TRANSPORT", 00:35:13.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:13.680 "adrfam": "ipv4", 00:35:13.680 "trsvcid": "$NVMF_PORT", 00:35:13.680 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:13.680 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:13.680 "hdgst": ${hdgst:-false}, 00:35:13.680 "ddgst": ${ddgst:-false} 00:35:13.680 }, 00:35:13.680 "method": "bdev_nvme_attach_controller" 00:35:13.680 } 00:35:13.680 EOF 00:35:13.680 )") 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 
00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:13.680 { 00:35:13.680 "params": { 00:35:13.680 "name": "Nvme$subsystem", 00:35:13.680 "trtype": "$TEST_TRANSPORT", 00:35:13.680 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:13.680 "adrfam": "ipv4", 00:35:13.680 "trsvcid": "$NVMF_PORT", 00:35:13.680 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:13.680 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:13.680 "hdgst": ${hdgst:-false}, 00:35:13.680 "ddgst": ${ddgst:-false} 00:35:13.680 }, 00:35:13.680 "method": "bdev_nvme_attach_controller" 00:35:13.680 } 00:35:13.680 EOF 00:35:13.680 )") 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:13.680 "params": { 00:35:13.680 "name": "Nvme0", 00:35:13.680 "trtype": "tcp", 00:35:13.680 "traddr": "10.0.0.2", 00:35:13.680 "adrfam": "ipv4", 00:35:13.680 "trsvcid": "4420", 00:35:13.680 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:13.680 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:13.680 "hdgst": false, 00:35:13.680 "ddgst": false 00:35:13.680 }, 00:35:13.680 "method": "bdev_nvme_attach_controller" 00:35:13.680 },{ 00:35:13.680 "params": { 00:35:13.680 "name": "Nvme1", 00:35:13.680 "trtype": "tcp", 00:35:13.680 "traddr": "10.0.0.2", 00:35:13.680 "adrfam": "ipv4", 00:35:13.680 "trsvcid": "4420", 00:35:13.680 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:13.680 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:13.680 "hdgst": false, 00:35:13.680 "ddgst": false 00:35:13.680 }, 00:35:13.680 "method": "bdev_nvme_attach_controller" 00:35:13.680 }' 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:35:13.680 01:17:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:35:13.680 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:35:13.680 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:35:13.680 fio-3.35
00:35:13.680 Starting 2 threads
00:35:13.680 EAL: No free 2048 kB hugepages reported on node 1
00:35:23.650
00:35:23.650 filename0: (groupid=0, jobs=1): err= 0: pid=1995002: Fri Jul 26 01:17:53 2024
00:35:23.650 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10024msec)
00:35:23.650 slat (nsec): min=6813, max=19657, avg=9611.62, stdev=2261.55
00:35:23.650 clat (usec): min=40857, max=44670, avg=41051.16, stdev=335.34
00:35:23.650 lat (usec): min=40865, max=44686, avg=41060.77, stdev=335.53
00:35:23.650 clat percentiles (usec):
00:35:23.650 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:35:23.651 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:35:23.651 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206],
00:35:23.651 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827],
00:35:23.651 | 99.99th=[44827]
00:35:23.651 bw ( KiB/s): min= 384, max= 416, per=33.86%, avg=388.80, stdev=11.72, samples=20
00:35:23.651 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20
00:35:23.651 lat (msec) : 50=100.00%
00:35:23.651 cpu : usr=94.56%, sys=5.15%, ctx=24, majf=0, minf=79
00:35:23.651 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:23.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.651 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:23.651 latency : target=0, window=0, percentile=100.00%, depth=4
00:35:23.651 filename1: (groupid=0, jobs=1): err= 0: pid=1995003: Fri Jul 26 01:17:53 2024
00:35:23.651 read: IOPS=189, BW=758KiB/s (777kB/s)(7584KiB/10001msec)
00:35:23.651 slat (nsec): min=5717, max=31457, avg=9763.82, stdev=2582.47
00:35:23.651 clat (usec): min=648, max=44680, avg=21068.23, stdev=20197.08
00:35:23.651 lat (usec): min=656, max=44695, avg=21078.00, stdev=20197.10
00:35:23.651 clat percentiles (usec):
00:35:23.651 | 1.00th=[ 693], 5.00th=[ 709], 10.00th=[ 734], 20.00th=[ 783],
00:35:23.651 | 30.00th=[ 799], 40.00th=[ 832], 50.00th=[40633], 60.00th=[41157],
00:35:23.651 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:35:23.651 | 99.00th=[41157], 99.50th=[41681], 99.90th=[44827], 99.95th=[44827],
00:35:23.651 | 99.99th=[44827]
00:35:23.651 bw ( KiB/s): min= 672, max= 768, per=66.23%, avg=759.58, stdev=23.47, samples=19
00:35:23.651 iops : min= 168, max= 192, avg=189.89, stdev= 5.87, samples=19
00:35:23.651 lat (usec) : 750=10.71%, 1000=38.19%
00:35:23.651 lat (msec) : 2=0.90%, 50=50.21%
00:35:23.651 cpu : usr=94.13%, sys=5.58%, ctx=21, majf=0, minf=171
00:35:23.651 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:23.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.651 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:23.651 latency : target=0, window=0, percentile=100.00%, depth=4
00:35:23.651
00:35:23.651 Run status group 0 (all jobs):
00:35:23.651 READ: bw=1146KiB/s (1174kB/s), 389KiB/s-758KiB/s (399kB/s-777kB/s), io=11.2MiB (11.8MB), run=10001-10024msec
00:35:23.651 01:17:53 nvmf_dif.fio_dif_1_multi_subsystems
-- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:23.651 01:17:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:23.651 01:17:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:23.651 01:17:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:23.651 01:17:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:23.651 01:17:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:23.651 01:17:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.651 01:17:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:23.651 01:17:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.651 01:17:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:23.651 01:17:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.651 01:17:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:23.651 01:17:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.651 01:17:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:23.651 01:17:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:23.651 01:17:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:23.651 01:17:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:23.651 01:17:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.651 01:17:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
00:35:23.651 01:17:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.651 01:17:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:23.651 01:17:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.651 01:17:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:23.651 01:17:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.651 00:35:23.651 real 0m11.385s 00:35:23.651 user 0m20.269s 00:35:23.651 sys 0m1.370s 00:35:23.651 01:17:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:23.651 01:17:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:23.651 ************************************ 00:35:23.651 END TEST fio_dif_1_multi_subsystems 00:35:23.651 ************************************ 00:35:23.651 01:17:53 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:23.651 01:17:53 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:23.651 01:17:53 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:23.651 01:17:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:23.651 ************************************ 00:35:23.651 START TEST fio_dif_rand_params 00:35:23.651 ************************************ 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 
00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:23.651 bdev_null0 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:23.651 [2024-07-26 01:17:53.936133] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:23.651 { 00:35:23.651 "params": { 00:35:23.651 "name": "Nvme$subsystem", 00:35:23.651 "trtype": "$TEST_TRANSPORT", 00:35:23.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:23.651 "adrfam": "ipv4", 00:35:23.651 "trsvcid": "$NVMF_PORT", 00:35:23.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:23.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:23.651 "hdgst": ${hdgst:-false}, 00:35:23.651 "ddgst": ${ddgst:-false} 00:35:23.651 }, 00:35:23.651 "method": "bdev_nvme_attach_controller" 00:35:23.651 } 00:35:23.651 EOF 00:35:23.651 )") 
00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file = 1 )) 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:23.651 01:17:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:23.651 "params": { 00:35:23.652 "name": "Nvme0", 00:35:23.652 "trtype": "tcp", 00:35:23.652 "traddr": "10.0.0.2", 00:35:23.652 "adrfam": "ipv4", 00:35:23.652 "trsvcid": "4420", 00:35:23.652 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:23.652 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:23.652 "hdgst": false, 00:35:23.652 "ddgst": false 00:35:23.652 }, 00:35:23.652 "method": "bdev_nvme_attach_controller" 00:35:23.652 }' 00:35:23.652 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:23.652 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:23.652 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:23.652 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:23.652 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:23.652 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:23.652 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:23.652 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:23.652 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:23.652 01:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 
-- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:35:23.911 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:35:23.911 ...
00:35:23.911 fio-3.35
00:35:23.911 Starting 3 threads
00:35:23.911 EAL: No free 2048 kB hugepages reported on node 1
00:35:30.475
00:35:30.475 filename0: (groupid=0, jobs=1): err= 0: pid=1996395: Fri Jul 26 01:17:59 2024
00:35:30.475 read: IOPS=229, BW=28.7MiB/s (30.1MB/s)(144MiB/5005msec)
00:35:30.475 slat (nsec): min=5051, max=35244, avg=14267.39, stdev=3284.48
00:35:30.475 clat (usec): min=5575, max=90648, avg=13025.51, stdev=10135.00
00:35:30.475 lat (usec): min=5589, max=90657, avg=13039.77, stdev=10134.93
00:35:30.475 clat percentiles (usec):
00:35:30.475 | 1.00th=[ 6063], 5.00th=[ 6456], 10.00th=[ 7504], 20.00th=[ 8586],
00:35:30.475 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[10421], 60.00th=[11469],
00:35:30.475 | 70.00th=[12387], 80.00th=[13698], 90.00th=[15926], 95.00th=[47973],
00:35:30.475 | 99.00th=[52691], 99.50th=[54264], 99.90th=[90702], 99.95th=[90702],
00:35:30.475 | 99.99th=[90702]
00:35:30.475 bw ( KiB/s): min=22784, max=35840, per=34.97%, avg=29394.00, stdev=4915.70, samples=10
00:35:30.475 iops : min= 178, max= 280, avg=229.60, stdev=38.43, samples=10
00:35:30.475 lat (msec) : 10=43.96%, 20=50.56%, 50=1.56%, 100=3.91%
00:35:30.475 cpu : usr=91.99%, sys=7.49%, ctx=11, majf=0, minf=66
00:35:30.475 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:30.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:30.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:30.475 issued rwts: total=1151,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:30.475 latency : target=0, window=0, percentile=100.00%, depth=3
00:35:30.475 filename0: (groupid=0, jobs=1): err= 0: pid=1996396: Fri Jul 26 01:17:59 2024
00:35:30.475 read: IOPS=231, BW=29.0MiB/s (30.4MB/s)(146MiB/5046msec)
00:35:30.475 slat (nsec): min=4975, max=42330, avg=15948.52, stdev=3370.48
00:35:30.475 clat (usec): min=4799, max=54900, avg=12880.25, stdev=11404.02
00:35:30.475 lat (usec): min=4812, max=54916, avg=12896.20, stdev=11404.12
00:35:30.475 clat percentiles (usec):
00:35:30.475 | 1.00th=[ 5014], 5.00th=[ 5473], 10.00th=[ 6194], 20.00th=[ 7635],
00:35:30.475 | 30.00th=[ 8160], 40.00th=[ 8586], 50.00th=[ 9372], 60.00th=[10945],
00:35:30.475 | 70.00th=[11994], 80.00th=[12780], 90.00th=[14222], 95.00th=[49021],
00:35:30.475 | 99.00th=[53216], 99.50th=[53740], 99.90th=[54789], 99.95th=[54789],
00:35:30.475 | 99.99th=[54789]
00:35:30.475 bw ( KiB/s): min=15616, max=38400, per=35.55%, avg=29881.00, stdev=7489.39, samples=10
00:35:30.475 iops : min= 122, max= 300, avg=233.40, stdev=58.52, samples=10
00:35:30.475 lat (msec) : 10=54.87%, 20=36.92%, 50=4.36%, 100=3.85%
00:35:30.475 cpu : usr=92.39%, sys=6.96%, ctx=5, majf=0, minf=120
00:35:30.475 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:30.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:30.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:30.475 issued rwts: total=1170,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:30.475 latency : target=0, window=0, percentile=100.00%, depth=3
00:35:30.475 filename0: (groupid=0, jobs=1): err= 0: pid=1996397: Fri Jul 26 01:17:59 2024
00:35:30.475 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(124MiB/5004msec)
00:35:30.475 slat (nsec): min=4779, max=29168, avg=13452.60, stdev=2338.62
00:35:30.475 clat (usec): min=4783, max=94029, avg=15096.66, stdev=14488.47
00:35:30.475 lat (usec): min=4796, max=94042, avg=15110.11, stdev=14488.40
00:35:30.475 clat percentiles (usec):
00:35:30.475 | 1.00th=[ 5473], 5.00th=[ 6849], 10.00th=[ 7504], 20.00th=[ 8160],
00:35:30.475 | 30.00th=[ 8586], 40.00th=[ 9241], 50.00th=[10552], 60.00th=[11207],
00:35:30.475 | 70.00th=[11863], 80.00th=[12387], 90.00th=[48497], 95.00th=[51119],
00:35:30.475 | 99.00th=[53216], 99.50th=[90702], 99.90th=[93848], 99.95th=[93848],
00:35:30.475 | 99.99th=[93848]
00:35:30.475 bw ( KiB/s): min=19456, max=32768, per=30.15%, avg=25344.00, stdev=4412.65, samples=10
00:35:30.475 iops : min= 152, max= 256, avg=198.00, stdev=34.47, samples=10
00:35:30.475 lat (msec) : 10=44.71%, 20=42.90%, 50=4.93%, 100=7.45%
00:35:30.475 cpu : usr=93.44%, sys=6.02%, ctx=7, majf=0, minf=57
00:35:30.475 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:30.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:30.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:30.475 issued rwts: total=993,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:30.475 latency : target=0, window=0, percentile=100.00%, depth=3
00:35:30.475
00:35:30.475 Run status group 0 (all jobs):
00:35:30.475 READ: bw=82.1MiB/s (86.1MB/s), 24.8MiB/s-29.0MiB/s (26.0MB/s-30.4MB/s), io=414MiB (434MB), run=5004-5046msec
00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0
00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:30.475 01:18:00
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.475 bdev_null0 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 
53313233-0 --allow-any-host 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.475 [2024-07-26 01:18:00.094907] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:35:30.475 bdev_null1 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.475 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.476 bdev_null2 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:30.476 01:18:00 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:30.476 { 00:35:30.476 "params": { 00:35:30.476 "name": "Nvme$subsystem", 00:35:30.476 "trtype": "$TEST_TRANSPORT", 00:35:30.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:30.476 "adrfam": "ipv4", 00:35:30.476 "trsvcid": "$NVMF_PORT", 00:35:30.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:30.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:30.476 "hdgst": ${hdgst:-false}, 00:35:30.476 "ddgst": ${ddgst:-false} 00:35:30.476 }, 00:35:30.476 "method": "bdev_nvme_attach_controller" 00:35:30.476 } 00:35:30.476 EOF 00:35:30.476 )") 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:30.476 { 00:35:30.476 "params": { 00:35:30.476 "name": "Nvme$subsystem", 00:35:30.476 "trtype": "$TEST_TRANSPORT", 00:35:30.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:30.476 "adrfam": "ipv4", 00:35:30.476 "trsvcid": "$NVMF_PORT", 00:35:30.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:30.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:30.476 "hdgst": ${hdgst:-false}, 00:35:30.476 "ddgst": ${ddgst:-false} 00:35:30.476 }, 00:35:30.476 "method": "bdev_nvme_attach_controller" 00:35:30.476 } 00:35:30.476 EOF 00:35:30.476 )") 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@554 -- # cat 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:30.476 { 00:35:30.476 "params": { 00:35:30.476 "name": "Nvme$subsystem", 00:35:30.476 "trtype": "$TEST_TRANSPORT", 00:35:30.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:30.476 "adrfam": "ipv4", 00:35:30.476 "trsvcid": "$NVMF_PORT", 00:35:30.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:30.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:30.476 "hdgst": ${hdgst:-false}, 00:35:30.476 "ddgst": ${ddgst:-false} 00:35:30.476 }, 00:35:30.476 "method": "bdev_nvme_attach_controller" 00:35:30.476 } 00:35:30.476 EOF 00:35:30.476 )") 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:30.476 "params": { 00:35:30.476 "name": "Nvme0", 00:35:30.476 "trtype": "tcp", 00:35:30.476 "traddr": "10.0.0.2", 00:35:30.476 "adrfam": "ipv4", 00:35:30.476 "trsvcid": "4420", 00:35:30.476 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:30.476 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:30.476 "hdgst": false, 00:35:30.476 "ddgst": false 00:35:30.476 }, 00:35:30.476 "method": "bdev_nvme_attach_controller" 00:35:30.476 },{ 00:35:30.476 "params": { 00:35:30.476 "name": "Nvme1", 00:35:30.476 "trtype": "tcp", 00:35:30.476 "traddr": "10.0.0.2", 00:35:30.476 "adrfam": "ipv4", 00:35:30.476 "trsvcid": "4420", 00:35:30.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:30.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:30.476 "hdgst": false, 00:35:30.476 "ddgst": false 00:35:30.476 }, 00:35:30.476 "method": "bdev_nvme_attach_controller" 00:35:30.476 },{ 00:35:30.476 "params": { 00:35:30.476 "name": "Nvme2", 00:35:30.476 "trtype": "tcp", 00:35:30.476 "traddr": "10.0.0.2", 00:35:30.476 "adrfam": "ipv4", 00:35:30.476 "trsvcid": "4420", 00:35:30.476 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:30.476 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:30.476 "hdgst": false, 00:35:30.476 "ddgst": false 00:35:30.476 }, 00:35:30.476 "method": "bdev_nvme_attach_controller" 00:35:30.476 }' 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:30.476 01:18:00 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:30.476 01:18:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:30.476 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:30.476 ... 00:35:30.476 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:30.476 ... 00:35:30.477 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:30.477 ... 
00:35:30.477 fio-3.35 00:35:30.477 Starting 24 threads 00:35:30.477 EAL: No free 2048 kB hugepages reported on node 1 00:35:42.684 00:35:42.684 filename0: (groupid=0, jobs=1): err= 0: pid=1997316: Fri Jul 26 01:18:11 2024 00:35:42.684 read: IOPS=59, BW=240KiB/s (246kB/s)(2432KiB/10136msec) 00:35:42.684 slat (usec): min=11, max=122, avg=57.50, stdev=24.62 00:35:42.684 clat (msec): min=162, max=317, avg=266.22, stdev=29.28 00:35:42.684 lat (msec): min=162, max=317, avg=266.28, stdev=29.29 00:35:42.684 clat percentiles (msec): 00:35:42.684 | 1.00th=[ 163], 5.00th=[ 171], 10.00th=[ 234], 20.00th=[ 259], 00:35:42.684 | 30.00th=[ 264], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 275], 00:35:42.684 | 70.00th=[ 275], 80.00th=[ 288], 90.00th=[ 292], 95.00th=[ 305], 00:35:42.684 | 99.00th=[ 317], 99.50th=[ 317], 99.90th=[ 317], 99.95th=[ 317], 00:35:42.684 | 99.99th=[ 317] 00:35:42.684 bw ( KiB/s): min= 128, max= 384, per=3.62%, avg=236.80, stdev=75.15, samples=20 00:35:42.684 iops : min= 32, max= 96, avg=59.20, stdev=18.79, samples=20 00:35:42.684 lat (msec) : 250=13.16%, 500=86.84% 00:35:42.684 cpu : usr=97.57%, sys=1.72%, ctx=41, majf=0, minf=9 00:35:42.684 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:42.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.684 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.684 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.684 filename0: (groupid=0, jobs=1): err= 0: pid=1997317: Fri Jul 26 01:18:11 2024 00:35:42.684 read: IOPS=66, BW=265KiB/s (271kB/s)(2688KiB/10156msec) 00:35:42.684 slat (usec): min=3, max=103, avg=41.03, stdev=23.53 00:35:42.684 clat (usec): min=1922, max=369406, avg=241382.70, stdev=78392.89 00:35:42.684 lat (usec): min=1935, max=369432, avg=241423.73, stdev=78398.27 00:35:42.684 clat percentiles (msec): 00:35:42.684 | 1.00th=[ 
3], 5.00th=[ 41], 10.00th=[ 116], 20.00th=[ 236], 00:35:42.684 | 30.00th=[ 259], 40.00th=[ 264], 50.00th=[ 268], 60.00th=[ 271], 00:35:42.684 | 70.00th=[ 275], 80.00th=[ 288], 90.00th=[ 292], 95.00th=[ 305], 00:35:42.684 | 99.00th=[ 317], 99.50th=[ 317], 99.90th=[ 372], 99.95th=[ 372], 00:35:42.684 | 99.99th=[ 372] 00:35:42.684 bw ( KiB/s): min= 128, max= 768, per=4.02%, avg=262.40, stdev=127.09, samples=20 00:35:42.684 iops : min= 32, max= 192, avg=65.60, stdev=31.77, samples=20 00:35:42.684 lat (msec) : 2=0.89%, 4=3.87%, 50=2.38%, 100=2.38%, 250=12.20% 00:35:42.684 lat (msec) : 500=78.27% 00:35:42.684 cpu : usr=98.10%, sys=1.31%, ctx=44, majf=0, minf=9 00:35:42.684 IO depths : 1=5.8%, 2=11.8%, 4=23.8%, 8=51.8%, 16=6.8%, 32=0.0%, >=64=0.0% 00:35:42.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.684 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.684 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.684 filename0: (groupid=0, jobs=1): err= 0: pid=1997318: Fri Jul 26 01:18:11 2024 00:35:42.684 read: IOPS=58, BW=234KiB/s (240kB/s)(2368KiB/10121msec) 00:35:42.684 slat (usec): min=11, max=133, avg=68.91, stdev=18.84 00:35:42.684 clat (msec): min=140, max=492, avg=272.95, stdev=44.60 00:35:42.684 lat (msec): min=140, max=492, avg=273.02, stdev=44.59 00:35:42.684 clat percentiles (msec): 00:35:42.684 | 1.00th=[ 167], 5.00th=[ 171], 10.00th=[ 236], 20.00th=[ 257], 00:35:42.684 | 30.00th=[ 264], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 275], 00:35:42.684 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 305], 95.00th=[ 372], 00:35:42.684 | 99.00th=[ 397], 99.50th=[ 418], 99.90th=[ 493], 99.95th=[ 493], 00:35:42.684 | 99.99th=[ 493] 00:35:42.684 bw ( KiB/s): min= 128, max= 272, per=3.53%, avg=230.40, stdev=50.97, samples=20 00:35:42.684 iops : min= 32, max= 68, avg=57.60, stdev=12.74, samples=20 00:35:42.684 lat (msec) 
: 250=12.16%, 500=87.84% 00:35:42.684 cpu : usr=97.03%, sys=1.90%, ctx=35, majf=0, minf=9 00:35:42.684 IO depths : 1=3.5%, 2=9.8%, 4=25.0%, 8=52.7%, 16=9.0%, 32=0.0%, >=64=0.0% 00:35:42.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.684 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.684 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.684 filename0: (groupid=0, jobs=1): err= 0: pid=1997319: Fri Jul 26 01:18:11 2024 00:35:42.684 read: IOPS=59, BW=239KiB/s (245kB/s)(2424KiB/10128msec) 00:35:42.684 slat (usec): min=9, max=126, avg=55.47, stdev=23.19 00:35:42.684 clat (msec): min=165, max=404, avg=266.66, stdev=43.96 00:35:42.684 lat (msec): min=165, max=404, avg=266.72, stdev=43.96 00:35:42.684 clat percentiles (msec): 00:35:42.684 | 1.00th=[ 171], 5.00th=[ 171], 10.00th=[ 199], 20.00th=[ 253], 00:35:42.684 | 30.00th=[ 266], 40.00th=[ 266], 50.00th=[ 268], 60.00th=[ 271], 00:35:42.684 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 309], 95.00th=[ 338], 00:35:42.684 | 99.00th=[ 393], 99.50th=[ 397], 99.90th=[ 405], 99.95th=[ 405], 00:35:42.684 | 99.99th=[ 405] 00:35:42.684 bw ( KiB/s): min= 128, max= 384, per=3.60%, avg=236.00, stdev=60.95, samples=20 00:35:42.684 iops : min= 32, max= 96, avg=59.00, stdev=15.24, samples=20 00:35:42.684 lat (msec) : 250=17.49%, 500=82.51% 00:35:42.684 cpu : usr=95.68%, sys=2.38%, ctx=86, majf=0, minf=9 00:35:42.684 IO depths : 1=3.1%, 2=9.4%, 4=25.1%, 8=53.1%, 16=9.2%, 32=0.0%, >=64=0.0% 00:35:42.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.684 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.684 issued rwts: total=606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.684 filename0: (groupid=0, jobs=1): err= 0: pid=1997320: Fri Jul 26 
01:18:11 2024 00:35:42.684 read: IOPS=58, BW=234KiB/s (240kB/s)(2368KiB/10121msec) 00:35:42.684 slat (nsec): min=9672, max=41943, avg=15556.86, stdev=4140.74 00:35:42.684 clat (msec): min=176, max=399, avg=273.27, stdev=31.96 00:35:42.684 lat (msec): min=176, max=399, avg=273.28, stdev=31.96 00:35:42.684 clat percentiles (msec): 00:35:42.684 | 1.00th=[ 188], 5.00th=[ 207], 10.00th=[ 239], 20.00th=[ 255], 00:35:42.684 | 30.00th=[ 266], 40.00th=[ 271], 50.00th=[ 271], 60.00th=[ 275], 00:35:42.684 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 313], 95.00th=[ 334], 00:35:42.684 | 99.00th=[ 372], 99.50th=[ 372], 99.90th=[ 401], 99.95th=[ 401], 00:35:42.684 | 99.99th=[ 401] 00:35:42.684 bw ( KiB/s): min= 128, max= 256, per=3.53%, avg=230.40, stdev=48.81, samples=20 00:35:42.684 iops : min= 32, max= 64, avg=57.60, stdev=12.20, samples=20 00:35:42.684 lat (msec) : 250=12.84%, 500=87.16% 00:35:42.684 cpu : usr=98.34%, sys=1.26%, ctx=14, majf=0, minf=9 00:35:42.684 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:35:42.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.684 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.684 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.684 filename0: (groupid=0, jobs=1): err= 0: pid=1997321: Fri Jul 26 01:18:11 2024 00:35:42.684 read: IOPS=60, BW=241KiB/s (247kB/s)(2432KiB/10073msec) 00:35:42.684 slat (usec): min=11, max=133, avg=66.68, stdev=16.38 00:35:42.684 clat (msec): min=121, max=424, avg=264.52, stdev=45.82 00:35:42.684 lat (msec): min=121, max=424, avg=264.58, stdev=45.83 00:35:42.684 clat percentiles (msec): 00:35:42.684 | 1.00th=[ 128], 5.00th=[ 169], 10.00th=[ 188], 20.00th=[ 251], 00:35:42.684 | 30.00th=[ 264], 40.00th=[ 268], 50.00th=[ 268], 60.00th=[ 271], 00:35:42.684 | 70.00th=[ 284], 80.00th=[ 288], 90.00th=[ 296], 95.00th=[ 321], 
00:35:42.684 | 99.00th=[ 393], 99.50th=[ 401], 99.90th=[ 426], 99.95th=[ 426], 00:35:42.685 | 99.99th=[ 426] 00:35:42.685 bw ( KiB/s): min= 128, max= 256, per=3.62%, avg=236.80, stdev=44.84, samples=20 00:35:42.685 iops : min= 32, max= 64, avg=59.20, stdev=11.21, samples=20 00:35:42.685 lat (msec) : 250=17.76%, 500=82.24% 00:35:42.685 cpu : usr=97.55%, sys=1.62%, ctx=42, majf=0, minf=9 00:35:42.685 IO depths : 1=3.5%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.0%, 32=0.0%, >=64=0.0% 00:35:42.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.685 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.685 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.685 filename0: (groupid=0, jobs=1): err= 0: pid=1997323: Fri Jul 26 01:18:11 2024 00:35:42.685 read: IOPS=84, BW=339KiB/s (347kB/s)(3432KiB/10130msec) 00:35:42.685 slat (nsec): min=8024, max=47420, avg=12829.81, stdev=5976.65 00:35:42.685 clat (msec): min=130, max=292, avg=188.09, stdev=25.09 00:35:42.685 lat (msec): min=130, max=292, avg=188.10, stdev=25.09 00:35:42.685 clat percentiles (msec): 00:35:42.685 | 1.00th=[ 136], 5.00th=[ 155], 10.00th=[ 165], 20.00th=[ 167], 00:35:42.685 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 188], 00:35:42.685 | 70.00th=[ 194], 80.00th=[ 201], 90.00th=[ 215], 95.00th=[ 232], 00:35:42.685 | 99.00th=[ 275], 99.50th=[ 288], 99.90th=[ 292], 99.95th=[ 292], 00:35:42.685 | 99.99th=[ 292] 00:35:42.685 bw ( KiB/s): min= 256, max= 384, per=5.15%, avg=336.80, stdev=42.96, samples=20 00:35:42.685 iops : min= 64, max= 96, avg=84.20, stdev=10.74, samples=20 00:35:42.685 lat (msec) : 250=96.50%, 500=3.50% 00:35:42.685 cpu : usr=98.15%, sys=1.48%, ctx=14, majf=0, minf=9 00:35:42.685 IO depths : 1=0.8%, 2=2.3%, 4=10.5%, 8=74.5%, 16=11.9%, 32=0.0%, >=64=0.0% 00:35:42.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:35:42.685 complete : 0=0.0%, 4=89.9%, 8=4.8%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.685 issued rwts: total=858,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.685 filename0: (groupid=0, jobs=1): err= 0: pid=1997324: Fri Jul 26 01:18:11 2024 00:35:42.685 read: IOPS=88, BW=353KiB/s (361kB/s)(3576KiB/10130msec) 00:35:42.685 slat (nsec): min=8256, max=79890, avg=13439.83, stdev=7926.83 00:35:42.685 clat (msec): min=139, max=222, avg=180.91, stdev=16.41 00:35:42.685 lat (msec): min=139, max=222, avg=180.93, stdev=16.41 00:35:42.685 clat percentiles (msec): 00:35:42.685 | 1.00th=[ 140], 5.00th=[ 155], 10.00th=[ 165], 20.00th=[ 167], 00:35:42.685 | 30.00th=[ 169], 40.00th=[ 171], 50.00th=[ 180], 60.00th=[ 184], 00:35:42.685 | 70.00th=[ 190], 80.00th=[ 199], 90.00th=[ 205], 95.00th=[ 207], 00:35:42.685 | 99.00th=[ 222], 99.50th=[ 222], 99.90th=[ 222], 99.95th=[ 222], 00:35:42.685 | 99.99th=[ 222] 00:35:42.685 bw ( KiB/s): min= 256, max= 512, per=5.38%, avg=351.20, stdev=63.04, samples=20 00:35:42.685 iops : min= 64, max= 128, avg=87.80, stdev=15.76, samples=20 00:35:42.685 lat (msec) : 250=100.00% 00:35:42.685 cpu : usr=97.79%, sys=1.61%, ctx=72, majf=0, minf=9 00:35:42.685 IO depths : 1=0.7%, 2=6.9%, 4=25.1%, 8=55.6%, 16=11.7%, 32=0.0%, >=64=0.0% 00:35:42.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.685 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.685 issued rwts: total=894,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.685 filename1: (groupid=0, jobs=1): err= 0: pid=1997325: Fri Jul 26 01:18:11 2024 00:35:42.685 read: IOPS=59, BW=240KiB/s (246kB/s)(2432KiB/10136msec) 00:35:42.685 slat (usec): min=10, max=100, avg=71.03, stdev=13.33 00:35:42.685 clat (msec): min=169, max=317, avg=266.10, stdev=27.14 00:35:42.685 lat (msec): min=169, max=318, 
avg=266.17, stdev=27.15 00:35:42.685 clat percentiles (msec): 00:35:42.685 | 1.00th=[ 171], 5.00th=[ 197], 10.00th=[ 232], 20.00th=[ 257], 00:35:42.685 | 30.00th=[ 264], 40.00th=[ 266], 50.00th=[ 271], 60.00th=[ 271], 00:35:42.685 | 70.00th=[ 275], 80.00th=[ 288], 90.00th=[ 292], 95.00th=[ 305], 00:35:42.685 | 99.00th=[ 317], 99.50th=[ 317], 99.90th=[ 317], 99.95th=[ 317], 00:35:42.685 | 99.99th=[ 317] 00:35:42.685 bw ( KiB/s): min= 128, max= 256, per=3.62%, avg=236.80, stdev=46.89, samples=20 00:35:42.685 iops : min= 32, max= 64, avg=59.20, stdev=11.72, samples=20 00:35:42.685 lat (msec) : 250=13.16%, 500=86.84% 00:35:42.685 cpu : usr=98.15%, sys=1.32%, ctx=43, majf=0, minf=9 00:35:42.685 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:42.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.685 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.685 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.685 filename1: (groupid=0, jobs=1): err= 0: pid=1997326: Fri Jul 26 01:18:11 2024 00:35:42.685 read: IOPS=87, BW=349KiB/s (358kB/s)(3544KiB/10146msec) 00:35:42.685 slat (nsec): min=5012, max=96313, avg=57748.42, stdev=18667.33 00:35:42.685 clat (msec): min=42, max=316, avg=182.40, stdev=39.10 00:35:42.685 lat (msec): min=42, max=316, avg=182.45, stdev=39.11 00:35:42.685 clat percentiles (msec): 00:35:42.685 | 1.00th=[ 43], 5.00th=[ 117], 10.00th=[ 155], 20.00th=[ 167], 00:35:42.685 | 30.00th=[ 171], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 186], 00:35:42.685 | 70.00th=[ 190], 80.00th=[ 199], 90.00th=[ 215], 95.00th=[ 239], 00:35:42.685 | 99.00th=[ 296], 99.50th=[ 313], 99.90th=[ 317], 99.95th=[ 317], 00:35:42.685 | 99.99th=[ 317] 00:35:42.685 bw ( KiB/s): min= 272, max= 513, per=5.33%, avg=348.05, stdev=56.03, samples=20 00:35:42.685 iops : min= 68, max= 128, avg=87.00, stdev=13.97, 
samples=20 00:35:42.685 lat (msec) : 50=1.81%, 100=1.81%, 250=91.42%, 500=4.97% 00:35:42.685 cpu : usr=97.84%, sys=1.58%, ctx=43, majf=0, minf=9 00:35:42.685 IO depths : 1=0.6%, 2=1.7%, 4=9.3%, 8=76.3%, 16=12.2%, 32=0.0%, >=64=0.0% 00:35:42.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.685 complete : 0=0.0%, 4=89.5%, 8=5.2%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.685 issued rwts: total=886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.685 filename1: (groupid=0, jobs=1): err= 0: pid=1997328: Fri Jul 26 01:18:11 2024 00:35:42.685 read: IOPS=58, BW=236KiB/s (241kB/s)(2368KiB/10053msec) 00:35:42.685 slat (nsec): min=6174, max=92653, avg=51018.03, stdev=25064.81 00:35:42.685 clat (msec): min=174, max=425, avg=271.25, stdev=28.74 00:35:42.685 lat (msec): min=174, max=425, avg=271.30, stdev=28.74 00:35:42.685 clat percentiles (msec): 00:35:42.685 | 1.00th=[ 186], 5.00th=[ 218], 10.00th=[ 243], 20.00th=[ 264], 00:35:42.685 | 30.00th=[ 268], 40.00th=[ 271], 50.00th=[ 271], 60.00th=[ 275], 00:35:42.685 | 70.00th=[ 288], 80.00th=[ 288], 90.00th=[ 292], 95.00th=[ 309], 00:35:42.685 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 426], 99.95th=[ 426], 00:35:42.685 | 99.99th=[ 426] 00:35:42.685 bw ( KiB/s): min= 128, max= 256, per=3.53%, avg=230.40, stdev=52.53, samples=20 00:35:42.685 iops : min= 32, max= 64, avg=57.60, stdev=13.13, samples=20 00:35:42.685 lat (msec) : 250=13.34%, 500=86.66% 00:35:42.685 cpu : usr=98.00%, sys=1.49%, ctx=27, majf=0, minf=9 00:35:42.685 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:35:42.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.685 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.685 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.685 
filename1: (groupid=0, jobs=1): err= 0: pid=1997329: Fri Jul 26 01:18:11 2024 00:35:42.685 read: IOPS=83, BW=336KiB/s (344kB/s)(3400KiB/10131msec) 00:35:42.685 slat (nsec): min=4696, max=62063, avg=12786.52, stdev=6271.86 00:35:42.685 clat (msec): min=134, max=321, avg=189.41, stdev=23.83 00:35:42.685 lat (msec): min=134, max=321, avg=189.42, stdev=23.83 00:35:42.685 clat percentiles (msec): 00:35:42.685 | 1.00th=[ 144], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 169], 00:35:42.685 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:35:42.685 | 70.00th=[ 194], 80.00th=[ 197], 90.00th=[ 215], 95.00th=[ 236], 00:35:42.685 | 99.00th=[ 268], 99.50th=[ 271], 99.90th=[ 321], 99.95th=[ 321], 00:35:42.685 | 99.99th=[ 321] 00:35:42.685 bw ( KiB/s): min= 256, max= 384, per=5.10%, avg=333.60, stdev=46.22, samples=20 00:35:42.685 iops : min= 64, max= 96, avg=83.40, stdev=11.55, samples=20 00:35:42.685 lat (msec) : 250=95.06%, 500=4.94% 00:35:42.685 cpu : usr=97.84%, sys=1.60%, ctx=39, majf=0, minf=9 00:35:42.685 IO depths : 1=1.1%, 2=2.5%, 4=10.4%, 8=74.6%, 16=11.5%, 32=0.0%, >=64=0.0% 00:35:42.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.685 complete : 0=0.0%, 4=89.9%, 8=4.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.685 issued rwts: total=850,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.685 filename1: (groupid=0, jobs=1): err= 0: pid=1997330: Fri Jul 26 01:18:11 2024 00:35:42.685 read: IOPS=58, BW=234KiB/s (240kB/s)(2368KiB/10120msec) 00:35:42.685 slat (nsec): min=11174, max=91321, avg=61309.60, stdev=17622.70 00:35:42.685 clat (msec): min=183, max=398, avg=273.00, stdev=28.61 00:35:42.685 lat (msec): min=183, max=398, avg=273.07, stdev=28.61 00:35:42.685 clat percentiles (msec): 00:35:42.685 | 1.00th=[ 194], 5.00th=[ 218], 10.00th=[ 243], 20.00th=[ 262], 00:35:42.685 | 30.00th=[ 266], 40.00th=[ 271], 50.00th=[ 271], 60.00th=[ 275], 00:35:42.685 
| 70.00th=[ 284], 80.00th=[ 288], 90.00th=[ 305], 95.00th=[ 338], 00:35:42.685 | 99.00th=[ 372], 99.50th=[ 372], 99.90th=[ 397], 99.95th=[ 397], 00:35:42.685 | 99.99th=[ 397] 00:35:42.685 bw ( KiB/s): min= 128, max= 256, per=3.53%, avg=230.40, stdev=50.70, samples=20 00:35:42.685 iops : min= 32, max= 64, avg=57.60, stdev=12.68, samples=20 00:35:42.685 lat (msec) : 250=12.16%, 500=87.84% 00:35:42.685 cpu : usr=97.73%, sys=1.62%, ctx=53, majf=0, minf=9 00:35:42.685 IO depths : 1=4.6%, 2=10.8%, 4=25.0%, 8=51.7%, 16=7.9%, 32=0.0%, >=64=0.0% 00:35:42.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.685 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.685 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.685 filename1: (groupid=0, jobs=1): err= 0: pid=1997332: Fri Jul 26 01:18:11 2024 00:35:42.686 read: IOPS=58, BW=234KiB/s (239kB/s)(2368KiB/10128msec) 00:35:42.686 slat (nsec): min=3929, max=44149, avg=24236.08, stdev=4713.47 00:35:42.686 clat (msec): min=171, max=340, avg=273.40, stdev=22.07 00:35:42.686 lat (msec): min=171, max=340, avg=273.42, stdev=22.07 00:35:42.686 clat percentiles (msec): 00:35:42.686 | 1.00th=[ 218], 5.00th=[ 239], 10.00th=[ 251], 20.00th=[ 264], 00:35:42.686 | 30.00th=[ 268], 40.00th=[ 271], 50.00th=[ 271], 60.00th=[ 275], 00:35:42.686 | 70.00th=[ 284], 80.00th=[ 288], 90.00th=[ 292], 95.00th=[ 313], 00:35:42.686 | 99.00th=[ 338], 99.50th=[ 338], 99.90th=[ 342], 99.95th=[ 342], 00:35:42.686 | 99.99th=[ 342] 00:35:42.686 bw ( KiB/s): min= 128, max= 256, per=3.53%, avg=230.40, stdev=46.83, samples=20 00:35:42.686 iops : min= 32, max= 64, avg=57.60, stdev=11.71, samples=20 00:35:42.686 lat (msec) : 250=8.45%, 500=91.55% 00:35:42.686 cpu : usr=97.43%, sys=1.96%, ctx=10, majf=0, minf=9 00:35:42.686 IO depths : 1=1.9%, 2=8.1%, 4=25.0%, 8=54.4%, 16=10.6%, 32=0.0%, >=64=0.0% 00:35:42.686 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.686 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.686 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.686 filename1: (groupid=0, jobs=1): err= 0: pid=1997334: Fri Jul 26 01:18:11 2024 00:35:42.686 read: IOPS=85, BW=342KiB/s (350kB/s)(3464KiB/10130msec) 00:35:42.686 slat (nsec): min=8044, max=78953, avg=14621.03, stdev=10240.45 00:35:42.686 clat (msec): min=122, max=292, avg=185.74, stdev=28.66 00:35:42.686 lat (msec): min=122, max=292, avg=185.75, stdev=28.66 00:35:42.686 clat percentiles (msec): 00:35:42.686 | 1.00th=[ 132], 5.00th=[ 138], 10.00th=[ 153], 20.00th=[ 167], 00:35:42.686 | 30.00th=[ 174], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 188], 00:35:42.686 | 70.00th=[ 194], 80.00th=[ 201], 90.00th=[ 215], 95.00th=[ 245], 00:35:42.686 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 292], 99.95th=[ 292], 00:35:42.686 | 99.99th=[ 292] 00:35:42.686 bw ( KiB/s): min= 256, max= 496, per=5.21%, avg=340.00, stdev=57.54, samples=20 00:35:42.686 iops : min= 64, max= 124, avg=85.00, stdev=14.39, samples=20 00:35:42.686 lat (msec) : 250=95.38%, 500=4.62% 00:35:42.686 cpu : usr=98.33%, sys=1.22%, ctx=41, majf=0, minf=9 00:35:42.686 IO depths : 1=0.2%, 2=1.6%, 4=9.9%, 8=75.6%, 16=12.6%, 32=0.0%, >=64=0.0% 00:35:42.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.686 complete : 0=0.0%, 4=89.8%, 8=5.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.686 issued rwts: total=866,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.686 filename1: (groupid=0, jobs=1): err= 0: pid=1997335: Fri Jul 26 01:18:11 2024 00:35:42.686 read: IOPS=86, BW=345KiB/s (353kB/s)(3496KiB/10146msec) 00:35:42.686 slat (nsec): min=5139, max=76491, avg=11522.39, stdev=7607.88 00:35:42.686 clat (msec): min=125, 
max=296, avg=185.15, stdev=26.25 00:35:42.686 lat (msec): min=125, max=296, avg=185.16, stdev=26.25 00:35:42.686 clat percentiles (msec): 00:35:42.686 | 1.00th=[ 128], 5.00th=[ 140], 10.00th=[ 161], 20.00th=[ 167], 00:35:42.686 | 30.00th=[ 174], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 188], 00:35:42.686 | 70.00th=[ 192], 80.00th=[ 199], 90.00th=[ 211], 95.00th=[ 234], 00:35:42.686 | 99.00th=[ 275], 99.50th=[ 296], 99.90th=[ 296], 99.95th=[ 296], 00:35:42.686 | 99.99th=[ 296] 00:35:42.686 bw ( KiB/s): min= 256, max= 432, per=5.26%, avg=343.20, stdev=42.96, samples=20 00:35:42.686 iops : min= 64, max= 108, avg=85.80, stdev=10.74, samples=20 00:35:42.686 lat (msec) : 250=97.03%, 500=2.97% 00:35:42.686 cpu : usr=98.36%, sys=1.26%, ctx=19, majf=0, minf=9 00:35:42.686 IO depths : 1=0.3%, 2=1.0%, 4=8.0%, 8=78.3%, 16=12.4%, 32=0.0%, >=64=0.0% 00:35:42.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.686 complete : 0=0.0%, 4=89.2%, 8=5.5%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.686 issued rwts: total=874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.686 filename2: (groupid=0, jobs=1): err= 0: pid=1997336: Fri Jul 26 01:18:11 2024 00:35:42.686 read: IOPS=58, BW=234KiB/s (240kB/s)(2368KiB/10120msec) 00:35:42.686 slat (nsec): min=18556, max=97696, avg=69087.05, stdev=12094.33 00:35:42.686 clat (msec): min=155, max=419, avg=272.91, stdev=36.73 00:35:42.686 lat (msec): min=155, max=419, avg=272.98, stdev=36.73 00:35:42.686 clat percentiles (msec): 00:35:42.686 | 1.00th=[ 169], 5.00th=[ 232], 10.00th=[ 253], 20.00th=[ 259], 00:35:42.686 | 30.00th=[ 264], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 275], 00:35:42.686 | 70.00th=[ 284], 80.00th=[ 288], 90.00th=[ 292], 95.00th=[ 317], 00:35:42.686 | 99.00th=[ 397], 99.50th=[ 397], 99.90th=[ 418], 99.95th=[ 418], 00:35:42.686 | 99.99th=[ 418] 00:35:42.686 bw ( KiB/s): min= 128, max= 272, per=3.53%, avg=230.40, stdev=52.79, 
samples=20 00:35:42.686 iops : min= 32, max= 68, avg=57.60, stdev=13.20, samples=20 00:35:42.686 lat (msec) : 250=9.80%, 500=90.20% 00:35:42.686 cpu : usr=97.90%, sys=1.48%, ctx=9, majf=0, minf=9 00:35:42.686 IO depths : 1=4.9%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.6%, 32=0.0%, >=64=0.0% 00:35:42.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.686 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.686 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.686 filename2: (groupid=0, jobs=1): err= 0: pid=1997338: Fri Jul 26 01:18:11 2024 00:35:42.686 read: IOPS=58, BW=234KiB/s (239kB/s)(2368KiB/10130msec) 00:35:42.686 slat (usec): min=11, max=129, avg=58.32, stdev=23.78 00:35:42.686 clat (msec): min=162, max=404, avg=271.59, stdev=37.66 00:35:42.686 lat (msec): min=162, max=404, avg=271.65, stdev=37.67 00:35:42.686 clat percentiles (msec): 00:35:42.686 | 1.00th=[ 169], 5.00th=[ 184], 10.00th=[ 234], 20.00th=[ 262], 00:35:42.686 | 30.00th=[ 266], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 275], 00:35:42.686 | 70.00th=[ 288], 80.00th=[ 288], 90.00th=[ 296], 95.00th=[ 351], 00:35:42.686 | 99.00th=[ 397], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:35:42.686 | 99.99th=[ 405] 00:35:42.686 bw ( KiB/s): min= 128, max= 384, per=3.53%, avg=230.40, stdev=64.08, samples=20 00:35:42.686 iops : min= 32, max= 96, avg=57.60, stdev=16.02, samples=20 00:35:42.686 lat (msec) : 250=13.18%, 500=86.82% 00:35:42.686 cpu : usr=96.06%, sys=2.29%, ctx=68, majf=0, minf=9 00:35:42.686 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:35:42.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.686 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.686 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.686 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:35:42.686 filename2: (groupid=0, jobs=1): err= 0: pid=1997339: Fri Jul 26 01:18:11 2024 00:35:42.686 read: IOPS=81, BW=326KiB/s (334kB/s)(3304KiB/10130msec) 00:35:42.686 slat (usec): min=8, max=103, avg=57.52, stdev=18.67 00:35:42.686 clat (msec): min=147, max=296, avg=194.64, stdev=35.49 00:35:42.686 lat (msec): min=147, max=296, avg=194.70, stdev=35.50 00:35:42.686 clat percentiles (msec): 00:35:42.686 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 167], 00:35:42.686 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 186], 60.00th=[ 190], 00:35:42.686 | 70.00th=[ 205], 80.00th=[ 215], 90.00th=[ 262], 95.00th=[ 275], 00:35:42.686 | 99.00th=[ 296], 99.50th=[ 296], 99.90th=[ 296], 99.95th=[ 296], 00:35:42.686 | 99.99th=[ 296] 00:35:42.686 bw ( KiB/s): min= 256, max= 384, per=4.97%, avg=324.00, stdev=37.03, samples=20 00:35:42.686 iops : min= 64, max= 96, avg=81.00, stdev= 9.26, samples=20 00:35:42.686 lat (msec) : 250=87.89%, 500=12.11% 00:35:42.686 cpu : usr=97.70%, sys=1.62%, ctx=29, majf=0, minf=9 00:35:42.686 IO depths : 1=0.5%, 2=1.6%, 4=8.5%, 8=76.8%, 16=12.7%, 32=0.0%, >=64=0.0% 00:35:42.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.686 complete : 0=0.0%, 4=89.2%, 8=6.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.686 issued rwts: total=826,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.686 filename2: (groupid=0, jobs=1): err= 0: pid=1997340: Fri Jul 26 01:18:11 2024 00:35:42.686 read: IOPS=58, BW=234KiB/s (240kB/s)(2368KiB/10115msec) 00:35:42.686 slat (nsec): min=10391, max=98984, avg=66329.10, stdev=11136.41 00:35:42.686 clat (msec): min=215, max=330, avg=272.75, stdev=20.24 00:35:42.686 lat (msec): min=215, max=330, avg=272.82, stdev=20.24 00:35:42.686 clat percentiles (msec): 00:35:42.686 | 1.00th=[ 218], 5.00th=[ 239], 10.00th=[ 251], 20.00th=[ 264], 00:35:42.686 | 30.00th=[ 268], 40.00th=[ 271], 50.00th=[ 
271], 60.00th=[ 275], 00:35:42.686 | 70.00th=[ 279], 80.00th=[ 288], 90.00th=[ 292], 95.00th=[ 313], 00:35:42.686 | 99.00th=[ 330], 99.50th=[ 330], 99.90th=[ 330], 99.95th=[ 330], 00:35:42.686 | 99.99th=[ 330] 00:35:42.686 bw ( KiB/s): min= 128, max= 256, per=3.53%, avg=230.40, stdev=52.53, samples=20 00:35:42.686 iops : min= 32, max= 64, avg=57.60, stdev=13.13, samples=20 00:35:42.686 lat (msec) : 250=10.30%, 500=89.70% 00:35:42.686 cpu : usr=97.78%, sys=1.48%, ctx=25, majf=0, minf=9 00:35:42.686 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:42.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.686 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.686 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.686 filename2: (groupid=0, jobs=1): err= 0: pid=1997342: Fri Jul 26 01:18:11 2024 00:35:42.686 read: IOPS=88, BW=354KiB/s (362kB/s)(3592KiB/10147msec) 00:35:42.686 slat (nsec): min=4080, max=46200, avg=10930.45, stdev=4529.11 00:35:42.686 clat (msec): min=34, max=294, avg=179.96, stdev=44.09 00:35:42.686 lat (msec): min=34, max=294, avg=179.97, stdev=44.09 00:35:42.686 clat percentiles (msec): 00:35:42.686 | 1.00th=[ 35], 5.00th=[ 92], 10.00th=[ 140], 20.00th=[ 165], 00:35:42.686 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 184], 60.00th=[ 186], 00:35:42.686 | 70.00th=[ 190], 80.00th=[ 203], 90.00th=[ 232], 95.00th=[ 264], 00:35:42.686 | 99.00th=[ 292], 99.50th=[ 296], 99.90th=[ 296], 99.95th=[ 296], 00:35:42.686 | 99.99th=[ 296] 00:35:42.686 bw ( KiB/s): min= 272, max= 512, per=5.39%, avg=352.80, stdev=64.52, samples=20 00:35:42.686 iops : min= 68, max= 128, avg=88.20, stdev=16.13, samples=20 00:35:42.686 lat (msec) : 50=3.56%, 100=1.78%, 250=89.53%, 500=5.12% 00:35:42.687 cpu : usr=98.13%, sys=1.52%, ctx=10, majf=0, minf=9 00:35:42.687 IO depths : 1=0.4%, 2=1.8%, 4=9.7%, 8=75.7%, 
16=12.4%, 32=0.0%, >=64=0.0% 00:35:42.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.687 complete : 0=0.0%, 4=89.7%, 8=5.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.687 issued rwts: total=898,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.687 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.687 filename2: (groupid=0, jobs=1): err= 0: pid=1997343: Fri Jul 26 01:18:11 2024 00:35:42.687 read: IOPS=58, BW=234KiB/s (239kB/s)(2368KiB/10136msec) 00:35:42.687 slat (usec): min=12, max=129, avg=60.72, stdev=16.82 00:35:42.687 clat (msec): min=176, max=410, avg=273.32, stdev=33.71 00:35:42.687 lat (msec): min=176, max=410, avg=273.38, stdev=33.70 00:35:42.687 clat percentiles (msec): 00:35:42.687 | 1.00th=[ 188], 5.00th=[ 207], 10.00th=[ 239], 20.00th=[ 255], 00:35:42.687 | 30.00th=[ 264], 40.00th=[ 271], 50.00th=[ 271], 60.00th=[ 275], 00:35:42.687 | 70.00th=[ 288], 80.00th=[ 288], 90.00th=[ 313], 95.00th=[ 338], 00:35:42.687 | 99.00th=[ 372], 99.50th=[ 397], 99.90th=[ 409], 99.95th=[ 409], 00:35:42.687 | 99.99th=[ 409] 00:35:42.687 bw ( KiB/s): min= 128, max= 256, per=3.53%, avg=230.40, stdev=48.81, samples=20 00:35:42.687 iops : min= 32, max= 64, avg=57.60, stdev=12.20, samples=20 00:35:42.687 lat (msec) : 250=13.85%, 500=86.15% 00:35:42.687 cpu : usr=97.98%, sys=1.47%, ctx=81, majf=0, minf=9 00:35:42.687 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:35:42.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.687 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.687 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.687 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.687 filename2: (groupid=0, jobs=1): err= 0: pid=1997344: Fri Jul 26 01:18:11 2024 00:35:42.687 read: IOPS=58, BW=234KiB/s (240kB/s)(2368KiB/10122msec) 00:35:42.687 slat (nsec): min=6328, max=95362, avg=53449.01, 
stdev=25161.16 00:35:42.687 clat (msec): min=175, max=402, avg=271.39, stdev=34.37 00:35:42.687 lat (msec): min=176, max=402, avg=271.45, stdev=34.37 00:35:42.687 clat percentiles (msec): 00:35:42.687 | 1.00th=[ 186], 5.00th=[ 201], 10.00th=[ 218], 20.00th=[ 255], 00:35:42.687 | 30.00th=[ 266], 40.00th=[ 271], 50.00th=[ 271], 60.00th=[ 275], 00:35:42.687 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 305], 95.00th=[ 338], 00:35:42.687 | 99.00th=[ 372], 99.50th=[ 384], 99.90th=[ 401], 99.95th=[ 401], 00:35:42.687 | 99.99th=[ 401] 00:35:42.687 bw ( KiB/s): min= 128, max= 256, per=3.53%, avg=230.40, stdev=48.81, samples=20 00:35:42.687 iops : min= 32, max= 64, avg=57.60, stdev=12.20, samples=20 00:35:42.687 lat (msec) : 250=15.54%, 500=84.46% 00:35:42.687 cpu : usr=98.15%, sys=1.38%, ctx=18, majf=0, minf=9 00:35:42.687 IO depths : 1=3.0%, 2=9.3%, 4=25.0%, 8=53.2%, 16=9.5%, 32=0.0%, >=64=0.0% 00:35:42.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.687 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.687 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.687 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.687 filename2: (groupid=0, jobs=1): err= 0: pid=1997345: Fri Jul 26 01:18:11 2024 00:35:42.687 read: IOPS=58, BW=234KiB/s (240kB/s)(2368KiB/10120msec) 00:35:42.687 slat (nsec): min=8367, max=94227, avg=51241.44, stdev=26877.50 00:35:42.687 clat (msec): min=147, max=395, avg=273.06, stdev=44.83 00:35:42.687 lat (msec): min=147, max=395, avg=273.12, stdev=44.82 00:35:42.687 clat percentiles (msec): 00:35:42.687 | 1.00th=[ 159], 5.00th=[ 171], 10.00th=[ 232], 20.00th=[ 259], 00:35:42.687 | 30.00th=[ 264], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 275], 00:35:42.687 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 317], 95.00th=[ 372], 00:35:42.687 | 99.00th=[ 397], 99.50th=[ 397], 99.90th=[ 397], 99.95th=[ 397], 00:35:42.687 | 99.99th=[ 397] 00:35:42.687 bw ( KiB/s): min= 
128, max= 256, per=3.53%, avg=230.40, stdev=50.70, samples=20 00:35:42.687 iops : min= 32, max= 64, avg=57.60, stdev=12.68, samples=20 00:35:42.687 lat (msec) : 250=12.84%, 500=87.16% 00:35:42.687 cpu : usr=98.04%, sys=1.48%, ctx=68, majf=0, minf=9 00:35:42.687 IO depths : 1=3.5%, 2=9.8%, 4=25.0%, 8=52.7%, 16=9.0%, 32=0.0%, >=64=0.0% 00:35:42.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.687 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.687 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.687 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:42.687 00:35:42.687 Run status group 0 (all jobs): 00:35:42.687 READ: bw=6525KiB/s (6681kB/s), 234KiB/s-354KiB/s (239kB/s-362kB/s), io=64.7MiB (67.9MB), run=10053-10156msec 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@10 -- # set +x 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.687 
01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.687 bdev_null0 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 
--serial-number 53313233-0 --allow-any-host 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.687 [2024-07-26 01:18:11.893051] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:42.687 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:35:42.688 bdev_null1 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@532 -- # local subsystem config 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:42.688 { 00:35:42.688 "params": { 00:35:42.688 "name": "Nvme$subsystem", 00:35:42.688 "trtype": "$TEST_TRANSPORT", 00:35:42.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.688 "adrfam": "ipv4", 00:35:42.688 "trsvcid": "$NVMF_PORT", 00:35:42.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.688 "hdgst": ${hdgst:-false}, 00:35:42.688 "ddgst": ${ddgst:-false} 00:35:42.688 }, 00:35:42.688 "method": "bdev_nvme_attach_controller" 00:35:42.688 } 00:35:42.688 EOF 00:35:42.688 )") 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:42.688 01:18:11 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:42.688 { 00:35:42.688 "params": { 00:35:42.688 "name": "Nvme$subsystem", 00:35:42.688 "trtype": "$TEST_TRANSPORT", 00:35:42.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.688 "adrfam": "ipv4", 00:35:42.688 "trsvcid": "$NVMF_PORT", 00:35:42.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.688 "hdgst": ${hdgst:-false}, 00:35:42.688 "ddgst": ${ddgst:-false} 00:35:42.688 }, 00:35:42.688 "method": "bdev_nvme_attach_controller" 00:35:42.688 } 00:35:42.688 EOF 00:35:42.688 )") 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:42.688 01:18:11 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:42.688 "params": { 00:35:42.688 "name": "Nvme0", 00:35:42.688 "trtype": "tcp", 00:35:42.688 "traddr": "10.0.0.2", 00:35:42.688 "adrfam": "ipv4", 00:35:42.688 "trsvcid": "4420", 00:35:42.688 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:42.688 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:42.688 "hdgst": false, 00:35:42.688 "ddgst": false 00:35:42.688 }, 00:35:42.688 "method": "bdev_nvme_attach_controller" 00:35:42.688 },{ 00:35:42.688 "params": { 00:35:42.688 "name": "Nvme1", 00:35:42.688 "trtype": "tcp", 00:35:42.688 "traddr": "10.0.0.2", 00:35:42.688 "adrfam": "ipv4", 00:35:42.688 "trsvcid": "4420", 00:35:42.688 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:42.688 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:42.688 "hdgst": false, 00:35:42.688 "ddgst": false 00:35:42.688 }, 00:35:42.688 "method": "bdev_nvme_attach_controller" 00:35:42.688 }' 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # 
asan_lib= 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:42.688 01:18:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:42.688 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:42.688 ... 00:35:42.688 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:42.688 ... 00:35:42.688 fio-3.35 00:35:42.688 Starting 4 threads 00:35:42.688 EAL: No free 2048 kB hugepages reported on node 1 00:35:47.993 00:35:47.993 filename0: (groupid=0, jobs=1): err= 0: pid=1999275: Fri Jul 26 01:18:17 2024 00:35:47.993 read: IOPS=2011, BW=15.7MiB/s (16.5MB/s)(78.6MiB/5002msec) 00:35:47.993 slat (usec): min=4, max=198, avg=15.50, stdev= 6.94 00:35:47.993 clat (usec): min=802, max=10966, avg=3926.84, stdev=641.57 00:35:47.993 lat (usec): min=820, max=10980, avg=3942.34, stdev=641.37 00:35:47.993 clat percentiles (usec): 00:35:47.993 | 1.00th=[ 2376], 5.00th=[ 2966], 10.00th=[ 3261], 20.00th=[ 3589], 00:35:47.993 | 30.00th=[ 3752], 40.00th=[ 3916], 50.00th=[ 3982], 60.00th=[ 4015], 00:35:47.993 | 70.00th=[ 4080], 80.00th=[ 4113], 90.00th=[ 4359], 95.00th=[ 5080], 00:35:47.993 | 99.00th=[ 6128], 99.50th=[ 6521], 99.90th=[ 7308], 99.95th=[10945], 00:35:47.993 | 99.99th=[10945] 00:35:47.993 bw ( KiB/s): min=15552, max=17504, per=25.75%, avg=16133.33, stdev=633.21, samples=9 00:35:47.993 iops : min= 1944, max= 2188, avg=2016.67, stdev=79.15, samples=9 00:35:47.993 lat (usec) : 1000=0.01% 00:35:47.993 lat (msec) : 2=0.43%, 4=54.12%, 10=45.37%, 20=0.08% 00:35:47.993 cpu : usr=87.82%, sys=8.98%, ctx=315, majf=0, minf=0 
00:35:47.993 IO depths : 1=0.1%, 2=9.5%, 4=62.3%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:47.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.993 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.993 issued rwts: total=10060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.993 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:47.993 filename0: (groupid=0, jobs=1): err= 0: pid=1999276: Fri Jul 26 01:18:17 2024 00:35:47.993 read: IOPS=1923, BW=15.0MiB/s (15.8MB/s)(75.2MiB/5003msec) 00:35:47.993 slat (nsec): min=4510, max=32524, avg=13331.72, stdev=4172.12 00:35:47.993 clat (usec): min=819, max=9000, avg=4114.55, stdev=640.09 00:35:47.993 lat (usec): min=832, max=9014, avg=4127.88, stdev=639.63 00:35:47.993 clat percentiles (usec): 00:35:47.993 | 1.00th=[ 2802], 5.00th=[ 3458], 10.00th=[ 3621], 20.00th=[ 3818], 00:35:47.993 | 30.00th=[ 3916], 40.00th=[ 3982], 50.00th=[ 4015], 60.00th=[ 4080], 00:35:47.993 | 70.00th=[ 4113], 80.00th=[ 4228], 90.00th=[ 4621], 95.00th=[ 5604], 00:35:47.993 | 99.00th=[ 6521], 99.50th=[ 6718], 99.90th=[ 7373], 99.95th=[ 8848], 00:35:47.993 | 99.99th=[ 8979] 00:35:47.993 bw ( KiB/s): min=14944, max=16096, per=24.55%, avg=15380.80, stdev=319.65, samples=10 00:35:47.993 iops : min= 1868, max= 2012, avg=1922.60, stdev=39.96, samples=10 00:35:47.993 lat (usec) : 1000=0.06% 00:35:47.993 lat (msec) : 2=0.38%, 4=44.09%, 10=55.46% 00:35:47.993 cpu : usr=93.24%, sys=6.02%, ctx=9, majf=0, minf=0 00:35:47.993 IO depths : 1=0.1%, 2=9.1%, 4=63.6%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:47.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.993 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.993 issued rwts: total=9621,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.993 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:47.993 filename1: (groupid=0, jobs=1): err= 0: pid=1999277: Fri Jul 26 
01:18:17 2024 00:35:47.993 read: IOPS=1963, BW=15.3MiB/s (16.1MB/s)(76.7MiB/5002msec) 00:35:47.994 slat (nsec): min=4566, max=30106, avg=12776.80, stdev=3527.71 00:35:47.994 clat (usec): min=777, max=8423, avg=4032.21, stdev=612.16 00:35:47.994 lat (usec): min=790, max=8435, avg=4044.98, stdev=612.14 00:35:47.994 clat percentiles (usec): 00:35:47.994 | 1.00th=[ 2540], 5.00th=[ 3163], 10.00th=[ 3490], 20.00th=[ 3752], 00:35:47.994 | 30.00th=[ 3884], 40.00th=[ 3982], 50.00th=[ 4015], 60.00th=[ 4047], 00:35:47.994 | 70.00th=[ 4080], 80.00th=[ 4178], 90.00th=[ 4555], 95.00th=[ 5211], 00:35:47.994 | 99.00th=[ 6456], 99.50th=[ 6783], 99.90th=[ 7439], 99.95th=[ 8225], 00:35:47.994 | 99.99th=[ 8455] 00:35:47.994 bw ( KiB/s): min=14976, max=16224, per=25.05%, avg=15699.20, stdev=416.12, samples=10 00:35:47.994 iops : min= 1872, max= 2028, avg=1962.40, stdev=52.02, samples=10 00:35:47.994 lat (usec) : 1000=0.07% 00:35:47.994 lat (msec) : 2=0.40%, 4=46.01%, 10=53.52% 00:35:47.994 cpu : usr=92.60%, sys=6.66%, ctx=9, majf=0, minf=9 00:35:47.994 IO depths : 1=0.1%, 2=9.7%, 4=62.1%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:47.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.994 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.994 issued rwts: total=9820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.994 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:47.994 filename1: (groupid=0, jobs=1): err= 0: pid=1999278: Fri Jul 26 01:18:17 2024 00:35:47.994 read: IOPS=1935, BW=15.1MiB/s (15.9MB/s)(75.7MiB/5003msec) 00:35:47.994 slat (nsec): min=4501, max=32713, avg=12168.92, stdev=3365.31 00:35:47.994 clat (usec): min=860, max=9541, avg=4092.59, stdev=659.47 00:35:47.994 lat (usec): min=874, max=9554, avg=4104.76, stdev=659.20 00:35:47.994 clat percentiles (usec): 00:35:47.994 | 1.00th=[ 2737], 5.00th=[ 3326], 10.00th=[ 3556], 20.00th=[ 3752], 00:35:47.994 | 30.00th=[ 3916], 40.00th=[ 3982], 50.00th=[ 
4015], 60.00th=[ 4047], 00:35:47.994 | 70.00th=[ 4113], 80.00th=[ 4228], 90.00th=[ 4752], 95.00th=[ 5604], 00:35:47.994 | 99.00th=[ 6456], 99.50th=[ 6915], 99.90th=[ 7308], 99.95th=[ 9372], 00:35:47.994 | 99.99th=[ 9503] 00:35:47.994 bw ( KiB/s): min=14989, max=15968, per=24.71%, avg=15486.10, stdev=334.03, samples=10 00:35:47.994 iops : min= 1873, max= 1996, avg=1935.70, stdev=41.86, samples=10 00:35:47.994 lat (usec) : 1000=0.01% 00:35:47.994 lat (msec) : 2=0.26%, 4=44.89%, 10=54.84% 00:35:47.994 cpu : usr=93.12%, sys=6.20%, ctx=7, majf=0, minf=0 00:35:47.994 IO depths : 1=0.1%, 2=9.0%, 4=63.5%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:47.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.994 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.994 issued rwts: total=9684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.994 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:47.994 00:35:47.994 Run status group 0 (all jobs): 00:35:47.994 READ: bw=61.2MiB/s (64.2MB/s), 15.0MiB/s-15.7MiB/s (15.8MB/s-16.5MB/s), io=306MiB (321MB), run=5002-5003msec 00:35:47.994 01:18:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:47.994 01:18:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:47.994 01:18:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:47.994 01:18:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:47.994 01:18:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:47.994 01:18:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:47.994 01:18:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.994 01:18:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.994 01:18:18 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.994 01:18:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:47.994 01:18:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.994 01:18:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.994 01:18:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.994 01:18:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:47.994 01:18:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:47.994 01:18:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:47.994 01:18:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:47.994 01:18:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.994 01:18:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.994 01:18:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.994 01:18:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:47.994 01:18:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.994 01:18:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.994 01:18:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.994 00:35:47.994 real 0m24.279s 00:35:47.994 user 4m34.420s 00:35:47.994 sys 0m7.162s 00:35:47.994 01:18:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:47.994 01:18:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.994 ************************************ 00:35:47.994 END TEST fio_dif_rand_params 00:35:47.994 
************************************ 00:35:47.994 01:18:18 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:47.994 01:18:18 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:47.994 01:18:18 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:47.994 01:18:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:47.994 ************************************ 00:35:47.994 START TEST fio_dif_digest 00:35:47.994 ************************************ 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:47.994 bdev_null0 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:47.994 [2024-07-26 01:18:18.268033] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@82 
-- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:35:47.994 01:18:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:47.994 { 00:35:47.994 "params": { 00:35:47.994 "name": "Nvme$subsystem", 00:35:47.994 "trtype": "$TEST_TRANSPORT", 00:35:47.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:47.995 "adrfam": "ipv4", 00:35:47.995 "trsvcid": "$NVMF_PORT", 00:35:47.995 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:35:47.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:47.995 "hdgst": ${hdgst:-false}, 00:35:47.995 "ddgst": ${ddgst:-false} 00:35:47.995 }, 00:35:47.995 "method": "bdev_nvme_attach_controller" 00:35:47.995 } 00:35:47.995 EOF 00:35:47.995 )") 00:35:47.995 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:47.995 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:47.995 01:18:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:35:47.995 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:47.995 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:35:47.995 01:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:47.995 01:18:18 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:47.995 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:47.995 01:18:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:35:47.995 01:18:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:35:47.995 01:18:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:47.995 "params": { 00:35:47.995 "name": "Nvme0", 00:35:47.995 "trtype": "tcp", 00:35:47.995 "traddr": "10.0.0.2", 00:35:47.995 "adrfam": "ipv4", 00:35:47.995 "trsvcid": "4420", 00:35:47.995 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:47.995 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:47.995 "hdgst": true, 00:35:47.995 "ddgst": true 00:35:47.995 }, 00:35:47.995 "method": "bdev_nvme_attach_controller" 00:35:47.995 }' 00:35:47.995 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:47.995 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:47.995 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:47.995 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:47.995 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:47.995 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:47.995 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:47.995 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:47.995 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:47.995 01:18:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:48.253 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:48.253 ... 
00:35:48.253 fio-3.35 00:35:48.253 Starting 3 threads 00:35:48.253 EAL: No free 2048 kB hugepages reported on node 1 00:36:00.446 00:36:00.446 filename0: (groupid=0, jobs=1): err= 0: pid=2000036: Fri Jul 26 01:18:28 2024 00:36:00.446 read: IOPS=193, BW=24.2MiB/s (25.4MB/s)(242MiB/10007msec) 00:36:00.446 slat (nsec): min=4717, max=61697, avg=16354.42, stdev=6328.97 00:36:00.446 clat (usec): min=8368, max=19458, avg=15471.07, stdev=1218.19 00:36:00.446 lat (usec): min=8380, max=19487, avg=15487.43, stdev=1218.05 00:36:00.446 clat percentiles (usec): 00:36:00.446 | 1.00th=[12649], 5.00th=[13435], 10.00th=[13960], 20.00th=[14484], 00:36:00.446 | 30.00th=[14877], 40.00th=[15139], 50.00th=[15401], 60.00th=[15795], 00:36:00.446 | 70.00th=[16057], 80.00th=[16450], 90.00th=[17171], 95.00th=[17433], 00:36:00.446 | 99.00th=[18220], 99.50th=[19006], 99.90th=[19530], 99.95th=[19530], 00:36:00.446 | 99.99th=[19530] 00:36:00.446 bw ( KiB/s): min=24064, max=26368, per=32.63%, avg=24768.00, stdev=679.22, samples=20 00:36:00.446 iops : min= 188, max= 206, avg=193.50, stdev= 5.31, samples=20 00:36:00.446 lat (msec) : 10=0.05%, 20=99.95% 00:36:00.446 cpu : usr=90.68%, sys=8.39%, ctx=163, majf=0, minf=73 00:36:00.446 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:00.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:00.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:00.446 issued rwts: total=1938,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:00.446 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:00.446 filename0: (groupid=0, jobs=1): err= 0: pid=2000037: Fri Jul 26 01:18:28 2024 00:36:00.446 read: IOPS=200, BW=25.0MiB/s (26.2MB/s)(250MiB/10004msec) 00:36:00.446 slat (nsec): min=4401, max=52049, avg=17247.58, stdev=6928.51 00:36:00.446 clat (usec): min=9987, max=23604, avg=14960.12, stdev=1138.60 00:36:00.446 lat (usec): min=10009, max=23615, avg=14977.36, stdev=1138.05 
00:36:00.446 clat percentiles (usec): 00:36:00.446 | 1.00th=[12387], 5.00th=[13173], 10.00th=[13566], 20.00th=[14091], 00:36:00.446 | 30.00th=[14484], 40.00th=[14615], 50.00th=[14877], 60.00th=[15270], 00:36:00.446 | 70.00th=[15533], 80.00th=[15795], 90.00th=[16319], 95.00th=[16909], 00:36:00.446 | 99.00th=[17695], 99.50th=[17957], 99.90th=[21890], 99.95th=[21890], 00:36:00.446 | 99.99th=[23725] 00:36:00.446 bw ( KiB/s): min=24576, max=26880, per=33.74%, avg=25612.80, stdev=578.28, samples=20 00:36:00.446 iops : min= 192, max= 210, avg=200.10, stdev= 4.52, samples=20 00:36:00.446 lat (msec) : 10=0.05%, 20=99.80%, 50=0.15% 00:36:00.446 cpu : usr=90.40%, sys=8.05%, ctx=280, majf=0, minf=134 00:36:00.446 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:00.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:00.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:00.446 issued rwts: total=2003,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:00.446 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:00.446 filename0: (groupid=0, jobs=1): err= 0: pid=2000038: Fri Jul 26 01:18:28 2024 00:36:00.446 read: IOPS=199, BW=24.9MiB/s (26.1MB/s)(249MiB/10007msec) 00:36:00.446 slat (nsec): min=4864, max=67633, avg=15287.13, stdev=4773.65 00:36:00.446 clat (usec): min=6366, max=19392, avg=15034.45, stdev=1156.67 00:36:00.446 lat (usec): min=6379, max=19411, avg=15049.74, stdev=1156.44 00:36:00.446 clat percentiles (usec): 00:36:00.446 | 1.00th=[12387], 5.00th=[13173], 10.00th=[13566], 20.00th=[14091], 00:36:00.446 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15008], 60.00th=[15270], 00:36:00.446 | 70.00th=[15533], 80.00th=[15926], 90.00th=[16450], 95.00th=[16909], 00:36:00.446 | 99.00th=[17957], 99.50th=[18220], 99.90th=[18744], 99.95th=[19268], 00:36:00.446 | 99.99th=[19268] 00:36:00.446 bw ( KiB/s): min=24320, max=27136, per=33.59%, avg=25497.60, stdev=676.80, samples=20 00:36:00.446 
iops : min= 190, max= 212, avg=199.20, stdev= 5.29, samples=20 00:36:00.446 lat (msec) : 10=0.05%, 20=99.95% 00:36:00.446 cpu : usr=91.94%, sys=7.59%, ctx=20, majf=0, minf=170 00:36:00.446 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:00.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:00.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:00.446 issued rwts: total=1994,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:00.446 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:00.446 00:36:00.446 Run status group 0 (all jobs): 00:36:00.446 READ: bw=74.1MiB/s (77.7MB/s), 24.2MiB/s-25.0MiB/s (25.4MB/s-26.2MB/s), io=742MiB (778MB), run=10004-10007msec 00:36:00.446 01:18:29 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:00.446 01:18:29 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:00.446 01:18:29 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:00.446 01:18:29 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:00.446 01:18:29 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:00.446 01:18:29 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:00.446 01:18:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.446 01:18:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:00.446 01:18:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.446 01:18:29 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:00.446 01:18:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.446 01:18:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:00.446 01:18:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:36:00.446 00:36:00.446 real 0m10.996s 00:36:00.446 user 0m28.340s 00:36:00.446 sys 0m2.660s 00:36:00.446 01:18:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:00.446 01:18:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:00.446 ************************************ 00:36:00.446 END TEST fio_dif_digest 00:36:00.446 ************************************ 00:36:00.446 01:18:29 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:00.446 01:18:29 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:00.446 01:18:29 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:00.446 01:18:29 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:36:00.446 01:18:29 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:00.446 01:18:29 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:36:00.446 01:18:29 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:00.446 01:18:29 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:00.446 rmmod nvme_tcp 00:36:00.446 rmmod nvme_fabrics 00:36:00.446 rmmod nvme_keyring 00:36:00.446 01:18:29 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:00.446 01:18:29 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:36:00.446 01:18:29 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:36:00.446 01:18:29 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1993376 ']' 00:36:00.446 01:18:29 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1993376 00:36:00.446 01:18:29 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1993376 ']' 00:36:00.446 01:18:29 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1993376 00:36:00.446 01:18:29 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:36:00.446 01:18:29 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:00.446 01:18:29 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1993376 00:36:00.446 01:18:29 nvmf_dif -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:36:00.446 01:18:29 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:00.446 01:18:29 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1993376' 00:36:00.446 killing process with pid 1993376 00:36:00.446 01:18:29 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1993376 00:36:00.446 01:18:29 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1993376 00:36:00.446 01:18:29 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:00.446 01:18:29 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:00.446 Waiting for block devices as requested 00:36:00.446 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:00.446 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:00.704 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:00.704 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:00.704 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:00.704 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:00.963 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:00.963 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:00.963 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:00.963 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:00.963 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:01.222 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:01.222 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:01.222 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:01.482 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:01.482 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:01.482 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:01.740 01:18:31 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:01.740 01:18:31 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:01.740 01:18:31 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:01.740 01:18:31 nvmf_dif -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:36:01.741 01:18:31 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:01.741 01:18:31 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:01.741 01:18:31 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:03.644 01:18:33 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:03.644 00:36:03.644 real 1m6.205s 00:36:03.644 user 6m29.997s 00:36:03.644 sys 0m18.695s 00:36:03.644 01:18:33 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:03.644 01:18:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:03.644 ************************************ 00:36:03.644 END TEST nvmf_dif 00:36:03.644 ************************************ 00:36:03.644 01:18:33 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:03.644 01:18:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:03.644 01:18:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:03.644 01:18:33 -- common/autotest_common.sh@10 -- # set +x 00:36:03.644 ************************************ 00:36:03.644 START TEST nvmf_abort_qd_sizes 00:36:03.644 ************************************ 00:36:03.644 01:18:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:03.644 * Looking for test storage... 
00:36:03.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:03.644 01:18:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:03.644 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:03.644 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:03.644 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:03.644 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:03.644 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:03.644 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:03.644 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:03.644 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:03.644 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:03.644 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:03.904 01:18:34 
nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:36:03.904 01:18:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:05.804 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:05.804 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:36:05.804 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:05.804 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:05.804 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:05.804 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:05.804 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:05.804 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:36:05.804 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:05.804 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:36:05.804 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:36:05.804 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:36:05.804 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:36:05.804 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:36:05.804 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:36:05.804 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:05.804 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:05.804 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@304 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:05.805 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:05.805 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:36:05.805 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:05.805 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:05.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:05.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:36:05.805 00:36:05.805 --- 10.0.0.2 ping statistics --- 00:36:05.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:05.805 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:05.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:05.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:36:05.805 00:36:05.805 --- 10.0.0.1 ping statistics --- 00:36:05.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:05.805 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:05.805 01:18:36 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:07.182 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:07.182 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:07.182 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:07.182 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:07.182 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:07.182 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:07.182 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:07.182 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:07.182 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:07.182 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:07.182 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:07.182 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:07.182 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:07.182 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:07.182 0000:80:04.1 (8086 0e21): 
ioatdma -> vfio-pci 00:36:07.182 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:08.121 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:08.121 01:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:08.121 01:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:08.121 01:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:08.121 01:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:08.121 01:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:08.121 01:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:08.121 01:18:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:08.121 01:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:08.121 01:18:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:08.121 01:18:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:08.121 01:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2004818 00:36:08.121 01:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:08.121 01:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2004818 00:36:08.121 01:18:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 2004818 ']' 00:36:08.121 01:18:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:08.121 01:18:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:08.121 01:18:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:08.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:08.121 01:18:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:08.121 01:18:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:08.121 [2024-07-26 01:18:38.498091] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:36:08.121 [2024-07-26 01:18:38.498158] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:08.121 EAL: No free 2048 kB hugepages reported on node 1 00:36:08.380 [2024-07-26 01:18:38.566932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:08.380 [2024-07-26 01:18:38.666338] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:08.380 [2024-07-26 01:18:38.666406] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:08.380 [2024-07-26 01:18:38.666423] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:08.380 [2024-07-26 01:18:38.666437] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:08.380 [2024-07-26 01:18:38.666449] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:08.380 [2024-07-26 01:18:38.669083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:08.380 [2024-07-26 01:18:38.669134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:08.380 [2024-07-26 01:18:38.669219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:08.380 [2024-07-26 01:18:38.669222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:08.380 01:18:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:08.380 01:18:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:36:08.380 01:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:08.380 01:18:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:08.380 01:18:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:08.638 01:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:08.638 01:18:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:08.638 01:18:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:08.638 01:18:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:08.638 01:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:08.638 01:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:08.638 01:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:36:08.638 01:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:08.638 01:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:08.638 01:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
00:36:08.638 01:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:36:08.639 01:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:08.639 01:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:08.639 01:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:36:08.639 01:18:38 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:36:08.639 01:18:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:08.639 01:18:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:36:08.639 01:18:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:08.639 01:18:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:08.639 01:18:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:08.639 01:18:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:08.639 ************************************ 00:36:08.639 START TEST spdk_target_abort 00:36:08.639 ************************************ 00:36:08.639 01:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:36:08.639 01:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:08.639 01:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:36:08.639 01:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.639 01:18:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:11.926 spdk_targetn1 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:11.926 [2024-07-26 01:18:41.683832] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:11.926 [2024-07-26 01:18:41.716036] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:11.926 01:18:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:11.926 EAL: No free 2048 kB hugepages reported on node 1 00:36:14.478 Initializing NVMe Controllers 00:36:14.478 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:14.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:14.478 Initialization complete. Launching workers. 
00:36:14.478 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9485, failed: 0 00:36:14.478 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1369, failed to submit 8116 00:36:14.478 success 733, unsuccess 636, failed 0 00:36:14.478 01:18:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:14.478 01:18:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:14.478 EAL: No free 2048 kB hugepages reported on node 1 00:36:17.756 Initializing NVMe Controllers 00:36:17.756 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:17.756 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:17.756 Initialization complete. Launching workers. 
00:36:17.756 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8672, failed: 0 00:36:17.756 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1233, failed to submit 7439 00:36:17.756 success 305, unsuccess 928, failed 0 00:36:17.756 01:18:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:17.756 01:18:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:17.756 EAL: No free 2048 kB hugepages reported on node 1 00:36:21.042 Initializing NVMe Controllers 00:36:21.042 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:21.042 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:21.042 Initialization complete. Launching workers. 
00:36:21.042 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 28957, failed: 0 00:36:21.042 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2765, failed to submit 26192 00:36:21.042 success 445, unsuccess 2320, failed 0 00:36:21.042 01:18:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:21.042 01:18:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.042 01:18:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:21.042 01:18:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.042 01:18:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:21.042 01:18:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.042 01:18:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:22.418 01:18:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.418 01:18:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2004818 00:36:22.418 01:18:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 2004818 ']' 00:36:22.418 01:18:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 2004818 00:36:22.418 01:18:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:36:22.418 01:18:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:22.418 01:18:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2004818 00:36:22.418 01:18:52 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:22.418 01:18:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:22.418 01:18:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2004818' 00:36:22.418 killing process with pid 2004818 00:36:22.418 01:18:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 2004818 00:36:22.418 01:18:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 2004818 00:36:22.678 00:36:22.678 real 0m14.073s 00:36:22.678 user 0m51.932s 00:36:22.678 sys 0m3.080s 00:36:22.678 01:18:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:22.678 01:18:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:22.678 ************************************ 00:36:22.678 END TEST spdk_target_abort 00:36:22.678 ************************************ 00:36:22.678 01:18:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:22.678 01:18:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:22.678 01:18:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:22.678 01:18:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:22.678 ************************************ 00:36:22.678 START TEST kernel_target_abort 00:36:22.678 ************************************ 00:36:22.678 01:18:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:36:22.678 01:18:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:22.678 01:18:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:36:22.678 01:18:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- 
# ip_candidates=() 00:36:22.678 01:18:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:22.678 01:18:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:22.678 01:18:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:22.678 01:18:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:22.678 01:18:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:22.678 01:18:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:22.678 01:18:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:22.678 01:18:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:22.678 01:18:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:22.679 01:18:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:22.679 01:18:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:22.679 01:18:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:22.679 01:18:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:22.679 01:18:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:22.679 01:18:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:22.679 01:18:52 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:36:22.679 01:18:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:22.679 01:18:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:22.679 01:18:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:23.613 Waiting for block devices as requested 00:36:23.613 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:23.871 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:23.871 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:24.129 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:24.129 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:24.129 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:24.129 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:24.388 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:24.388 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:24.388 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:24.388 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:24.647 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:24.647 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:24.647 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:24.647 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:24.905 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:24.905 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:24.905 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:24.905 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:24.905 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:24.905 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local 
device=nvme0n1 00:36:24.905 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:24.905 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:36:24.905 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:24.905 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:24.905 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:25.164 No valid GPT data, bailing 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:36:25.164 00:36:25.164 Discovery Log Number of Records 2, Generation counter 2 00:36:25.164 =====Discovery Log Entry 0====== 00:36:25.164 trtype: tcp 00:36:25.164 adrfam: ipv4 00:36:25.164 subtype: current discovery subsystem 00:36:25.164 treq: not specified, sq flow control disable supported 00:36:25.164 portid: 1 00:36:25.164 trsvcid: 4420 00:36:25.164 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:25.164 traddr: 10.0.0.1 00:36:25.164 eflags: none 00:36:25.164 sectype: none 00:36:25.164 =====Discovery Log Entry 1====== 00:36:25.164 trtype: tcp 00:36:25.164 adrfam: ipv4 00:36:25.164 subtype: nvme subsystem 00:36:25.164 treq: not specified, sq flow control disable supported 00:36:25.164 portid: 1 00:36:25.164 trsvcid: 4420 00:36:25.164 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:25.164 traddr: 10.0.0.1 00:36:25.164 eflags: none 00:36:25.164 sectype: none 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:25.164 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:25.165 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:25.165 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:25.165 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:25.165 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:25.165 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:25.165 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:25.165 01:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:25.165 EAL: No free 2048 kB hugepages reported on node 1 00:36:28.453 Initializing NVMe Controllers 00:36:28.453 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:28.453 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:28.453 Initialization complete. Launching workers. 
00:36:28.453 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37437, failed: 0 00:36:28.453 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37437, failed to submit 0 00:36:28.453 success 0, unsuccess 37437, failed 0 00:36:28.453 01:18:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:28.453 01:18:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:28.453 EAL: No free 2048 kB hugepages reported on node 1 00:36:31.742 Initializing NVMe Controllers 00:36:31.742 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:31.742 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:31.742 Initialization complete. Launching workers. 
00:36:31.742 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 71375, failed: 0 00:36:31.742 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17998, failed to submit 53377 00:36:31.742 success 0, unsuccess 17998, failed 0 00:36:31.742 01:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:31.742 01:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:31.742 EAL: No free 2048 kB hugepages reported on node 1 00:36:35.030 Initializing NVMe Controllers 00:36:35.030 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:35.030 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:35.030 Initialization complete. Launching workers. 
00:36:35.030 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 69489, failed: 0 00:36:35.030 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17358, failed to submit 52131 00:36:35.030 success 0, unsuccess 17358, failed 0 00:36:35.030 01:19:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:35.030 01:19:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:35.030 01:19:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:35.030 01:19:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:35.030 01:19:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:35.030 01:19:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:35.030 01:19:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:35.030 01:19:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:35.030 01:19:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:35.030 01:19:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:35.597 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:35.597 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:35.597 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:35.597 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:35.597 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:35.597 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:35.597 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:35.597 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:35.857 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:35.857 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:35.857 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:35.857 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:35.857 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:35.857 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:35.857 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:35.857 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:36.794 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:36.794 00:36:36.794 real 0m14.150s 00:36:36.794 user 0m5.708s 00:36:36.794 sys 0m3.181s 00:36:36.794 01:19:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:36.794 01:19:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:36.794 ************************************ 00:36:36.794 END TEST kernel_target_abort 00:36:36.794 ************************************ 00:36:36.794 01:19:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:36.794 01:19:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:36.794 01:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:36.794 01:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:36:36.794 01:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:36.794 01:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:36:36.794 01:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:36.794 01:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:36.794 rmmod nvme_tcp 00:36:36.794 rmmod nvme_fabrics 00:36:36.794 rmmod nvme_keyring 00:36:36.794 01:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:36:36.794 01:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:36:36.794 01:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:36:36.794 01:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2004818 ']' 00:36:36.794 01:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2004818 00:36:36.794 01:19:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 2004818 ']' 00:36:36.794 01:19:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 2004818 00:36:36.794 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2004818) - No such process 00:36:36.794 01:19:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 2004818 is not found' 00:36:36.794 Process with pid 2004818 is not found 00:36:36.794 01:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:36.794 01:19:07 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:37.771 Waiting for block devices as requested 00:36:38.031 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:38.031 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:38.031 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:38.291 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:38.291 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:38.291 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:38.291 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:38.551 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:38.551 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:38.551 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:38.551 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:38.810 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:38.810 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:38.810 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:38.810 
0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:39.068 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:39.068 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:39.068 01:19:09 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:39.068 01:19:09 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:39.068 01:19:09 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:39.068 01:19:09 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:39.068 01:19:09 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:39.068 01:19:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:39.068 01:19:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:41.603 01:19:11 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:41.603 00:36:41.603 real 0m37.512s 00:36:41.603 user 0m59.697s 00:36:41.603 sys 0m9.535s 00:36:41.603 01:19:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:41.603 01:19:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:41.603 ************************************ 00:36:41.603 END TEST nvmf_abort_qd_sizes 00:36:41.603 ************************************ 00:36:41.603 01:19:11 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:41.603 01:19:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:41.603 01:19:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:41.603 01:19:11 -- common/autotest_common.sh@10 -- # set +x 00:36:41.603 ************************************ 00:36:41.603 START TEST keyring_file 00:36:41.603 ************************************ 00:36:41.603 01:19:11 keyring_file -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:41.603 * Looking for test storage... 00:36:41.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:41.603 01:19:11 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:41.603 01:19:11 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:41.603 01:19:11 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:41.603 01:19:11 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:41.603 01:19:11 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:41.603 01:19:11 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:41.603 01:19:11 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:41.603 01:19:11 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:41.603 01:19:11 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:41.603 01:19:11 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:41.603 01:19:11 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:41.603 01:19:11 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:41.603 01:19:11 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:41.603 01:19:11 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:41.603 01:19:11 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:41.603 01:19:11 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:41.603 01:19:11 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:41.603 01:19:11 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:41.603 01:19:11 keyring_file -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:41.603 01:19:11 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:41.603 01:19:11 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:41.603 01:19:11 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:41.603 01:19:11 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:41.603 01:19:11 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.603 01:19:11 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.603 01:19:11 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.603 01:19:11 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:41.603 01:19:11 keyring_file -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.603 01:19:11 keyring_file -- nvmf/common.sh@47 -- # : 0 00:36:41.603 01:19:11 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:41.603 01:19:11 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:41.603 01:19:11 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:41.603 01:19:11 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:41.603 01:19:11 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:41.603 01:19:11 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:41.603 01:19:11 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:41.603 01:19:11 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:41.603 01:19:11 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:41.603 01:19:11 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:41.603 01:19:11 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:41.603 01:19:11 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:41.603 01:19:11 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:41.603 01:19:11 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:41.603 01:19:11 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:41.603 01:19:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:41.604 01:19:11 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:41.604 01:19:11 
keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:41.604 01:19:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:41.604 01:19:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:41.604 01:19:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.xdMK9qzZ1M 00:36:41.604 01:19:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:41.604 01:19:11 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:41.604 01:19:11 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:41.604 01:19:11 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:41.604 01:19:11 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:41.604 01:19:11 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:41.604 01:19:11 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:41.604 01:19:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.xdMK9qzZ1M 00:36:41.604 01:19:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.xdMK9qzZ1M 00:36:41.604 01:19:11 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.xdMK9qzZ1M 00:36:41.604 01:19:11 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:41.604 01:19:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:41.604 01:19:11 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:41.604 01:19:11 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:41.604 01:19:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:41.604 01:19:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:41.604 01:19:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.nVFJHnSLgz 00:36:41.604 01:19:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:41.604 01:19:11 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:41.604 01:19:11 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:41.604 01:19:11 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:41.604 01:19:11 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:41.604 01:19:11 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:41.604 01:19:11 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:41.604 01:19:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.nVFJHnSLgz 00:36:41.604 01:19:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.nVFJHnSLgz 00:36:41.604 01:19:11 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.nVFJHnSLgz 00:36:41.604 01:19:11 keyring_file -- keyring/file.sh@30 -- # tgtpid=2010573 00:36:41.604 01:19:11 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:41.604 01:19:11 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2010573 00:36:41.604 01:19:11 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2010573 ']' 00:36:41.604 01:19:11 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:41.604 01:19:11 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:41.604 01:19:11 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:41.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:41.604 01:19:11 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:41.604 01:19:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:41.604 [2024-07-26 01:19:11.773109] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:36:41.604 [2024-07-26 01:19:11.773199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2010573 ] 00:36:41.604 EAL: No free 2048 kB hugepages reported on node 1 00:36:41.604 [2024-07-26 01:19:11.832420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:41.604 [2024-07-26 01:19:11.922262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:41.863 01:19:12 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:41.864 01:19:12 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:36:41.864 01:19:12 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:41.864 01:19:12 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.864 01:19:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:41.864 [2024-07-26 01:19:12.164151] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:41.864 null0 00:36:41.864 [2024-07-26 01:19:12.196186] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:41.864 [2024-07-26 01:19:12.196679] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:41.864 [2024-07-26 01:19:12.204186] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:41.864 01:19:12 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.864 01:19:12 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:41.864 01:19:12 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:41.864 01:19:12 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 
4420 nqn.2016-06.io.spdk:cnode0 00:36:41.864 01:19:12 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:41.864 01:19:12 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:41.864 01:19:12 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:41.864 01:19:12 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:41.864 01:19:12 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:41.864 01:19:12 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.864 01:19:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:41.864 [2024-07-26 01:19:12.212198] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:41.864 request: 00:36:41.864 { 00:36:41.864 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:41.864 "secure_channel": false, 00:36:41.864 "listen_address": { 00:36:41.864 "trtype": "tcp", 00:36:41.864 "traddr": "127.0.0.1", 00:36:41.864 "trsvcid": "4420" 00:36:41.864 }, 00:36:41.864 "method": "nvmf_subsystem_add_listener", 00:36:41.864 "req_id": 1 00:36:41.864 } 00:36:41.864 Got JSON-RPC error response 00:36:41.864 response: 00:36:41.864 { 00:36:41.864 "code": -32602, 00:36:41.864 "message": "Invalid parameters" 00:36:41.864 } 00:36:41.864 01:19:12 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:41.864 01:19:12 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:41.864 01:19:12 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:41.864 01:19:12 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:41.864 01:19:12 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:41.864 01:19:12 keyring_file -- keyring/file.sh@46 -- # bperfpid=2010577 00:36:41.864 01:19:12 keyring_file -- keyring/file.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:41.864 01:19:12 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2010577 /var/tmp/bperf.sock 00:36:41.864 01:19:12 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2010577 ']' 00:36:41.864 01:19:12 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:41.864 01:19:12 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:41.864 01:19:12 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:41.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:41.864 01:19:12 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:41.864 01:19:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:41.864 [2024-07-26 01:19:12.258542] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:36:41.864 [2024-07-26 01:19:12.258606] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2010577 ] 00:36:41.864 EAL: No free 2048 kB hugepages reported on node 1 00:36:42.122 [2024-07-26 01:19:12.319930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:42.122 [2024-07-26 01:19:12.410793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:42.122 01:19:12 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:42.122 01:19:12 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:36:42.122 01:19:12 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xdMK9qzZ1M 00:36:42.122 01:19:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xdMK9qzZ1M 00:36:42.380 01:19:12 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.nVFJHnSLgz 00:36:42.380 01:19:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.nVFJHnSLgz 00:36:42.638 01:19:13 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:36:42.638 01:19:13 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:36:42.638 01:19:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:42.638 01:19:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:42.638 01:19:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:42.896 01:19:13 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.xdMK9qzZ1M == 
\/\t\m\p\/\t\m\p\.\x\d\M\K\9\q\z\Z\1\M ]] 00:36:42.896 01:19:13 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:36:42.896 01:19:13 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:42.896 01:19:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:42.896 01:19:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:42.896 01:19:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:43.158 01:19:13 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.nVFJHnSLgz == \/\t\m\p\/\t\m\p\.\n\V\F\J\H\n\S\L\g\z ]] 00:36:43.158 01:19:13 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:36:43.158 01:19:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:43.158 01:19:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:43.158 01:19:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:43.158 01:19:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:43.158 01:19:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:43.415 01:19:13 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:36:43.415 01:19:13 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:36:43.415 01:19:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:43.416 01:19:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:43.416 01:19:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:43.416 01:19:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:43.416 01:19:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:43.672 01:19:14 keyring_file -- keyring/file.sh@54 -- # 
(( 1 == 1 )) 00:36:43.672 01:19:14 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:43.672 01:19:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:43.930 [2024-07-26 01:19:14.269513] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:43.930 nvme0n1 00:36:44.188 01:19:14 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:36:44.188 01:19:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:44.188 01:19:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:44.188 01:19:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:44.188 01:19:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:44.188 01:19:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:44.188 01:19:14 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:36:44.188 01:19:14 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:36:44.188 01:19:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:44.188 01:19:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:44.188 01:19:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:44.188 01:19:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:44.188 01:19:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:44.445 01:19:14 keyring_file -- 
keyring/file.sh@60 -- # (( 1 == 1 )) 00:36:44.445 01:19:14 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:44.704 Running I/O for 1 seconds... 00:36:45.640 00:36:45.640 Latency(us) 00:36:45.640 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:45.640 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:45.640 nvme0n1 : 1.02 6034.84 23.57 0.00 0.00 20997.86 4393.34 23107.51 00:36:45.640 =================================================================================================================== 00:36:45.640 Total : 6034.84 23.57 0.00 0.00 20997.86 4393.34 23107.51 00:36:45.640 0 00:36:45.640 01:19:15 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:45.640 01:19:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:45.897 01:19:16 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:36:45.897 01:19:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:45.897 01:19:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:45.897 01:19:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:45.897 01:19:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:45.897 01:19:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:46.155 01:19:16 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:36:46.155 01:19:16 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:36:46.155 01:19:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:46.155 01:19:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:46.155 01:19:16 
keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:46.155 01:19:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:46.155 01:19:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:46.412 01:19:16 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:46.412 01:19:16 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:46.412 01:19:16 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:46.412 01:19:16 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:46.412 01:19:16 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:46.412 01:19:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:46.412 01:19:16 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:46.412 01:19:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:46.412 01:19:16 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:46.412 01:19:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:46.670 [2024-07-26 01:19:17.008737] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:46.670 [2024-07-26 01:19:17.009261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda3710 (107): Transport endpoint is not connected 00:36:46.670 [2024-07-26 01:19:17.010254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda3710 (9): Bad file descriptor 00:36:46.670 [2024-07-26 01:19:17.011252] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:46.670 [2024-07-26 01:19:17.011272] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:46.670 [2024-07-26 01:19:17.011286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:46.670 request: 00:36:46.670 { 00:36:46.670 "name": "nvme0", 00:36:46.670 "trtype": "tcp", 00:36:46.670 "traddr": "127.0.0.1", 00:36:46.670 "adrfam": "ipv4", 00:36:46.670 "trsvcid": "4420", 00:36:46.670 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:46.670 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:46.670 "prchk_reftag": false, 00:36:46.670 "prchk_guard": false, 00:36:46.670 "hdgst": false, 00:36:46.670 "ddgst": false, 00:36:46.670 "psk": "key1", 00:36:46.670 "method": "bdev_nvme_attach_controller", 00:36:46.670 "req_id": 1 00:36:46.670 } 00:36:46.670 Got JSON-RPC error response 00:36:46.670 response: 00:36:46.670 { 00:36:46.670 "code": -5, 00:36:46.670 "message": "Input/output error" 00:36:46.670 } 00:36:46.670 01:19:17 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:46.670 01:19:17 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:46.670 01:19:17 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:46.670 01:19:17 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:46.670 01:19:17 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:36:46.670 
01:19:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:46.670 01:19:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:46.670 01:19:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:46.670 01:19:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:46.670 01:19:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:46.927 01:19:17 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:36:46.927 01:19:17 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:36:46.927 01:19:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:46.927 01:19:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:46.927 01:19:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:46.927 01:19:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:46.927 01:19:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:47.184 01:19:17 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:47.184 01:19:17 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:36:47.184 01:19:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:47.441 01:19:17 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:36:47.441 01:19:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:47.699 01:19:18 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:36:47.699 01:19:18 keyring_file -- keyring/file.sh@77 -- # jq length 00:36:47.699 01:19:18 keyring_file 
-- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:47.957 01:19:18 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:36:47.957 01:19:18 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.xdMK9qzZ1M 00:36:47.957 01:19:18 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.xdMK9qzZ1M 00:36:47.957 01:19:18 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:47.957 01:19:18 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.xdMK9qzZ1M 00:36:47.957 01:19:18 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:47.957 01:19:18 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:47.957 01:19:18 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:47.957 01:19:18 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:47.957 01:19:18 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xdMK9qzZ1M 00:36:47.957 01:19:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xdMK9qzZ1M 00:36:48.215 [2024-07-26 01:19:18.514156] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.xdMK9qzZ1M': 0100660 00:36:48.215 [2024-07-26 01:19:18.514194] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:48.215 request: 00:36:48.215 { 00:36:48.215 "name": "key0", 00:36:48.215 "path": "/tmp/tmp.xdMK9qzZ1M", 00:36:48.215 "method": "keyring_file_add_key", 00:36:48.215 "req_id": 1 00:36:48.215 } 00:36:48.215 Got JSON-RPC error response 00:36:48.215 response: 00:36:48.215 { 00:36:48.215 "code": -1, 00:36:48.215 "message": "Operation not permitted" 
00:36:48.215 } 00:36:48.215 01:19:18 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:48.215 01:19:18 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:48.215 01:19:18 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:48.215 01:19:18 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:48.215 01:19:18 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.xdMK9qzZ1M 00:36:48.215 01:19:18 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xdMK9qzZ1M 00:36:48.215 01:19:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xdMK9qzZ1M 00:36:48.473 01:19:18 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.xdMK9qzZ1M 00:36:48.473 01:19:18 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:36:48.473 01:19:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:48.473 01:19:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:48.473 01:19:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:48.473 01:19:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:48.473 01:19:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:48.730 01:19:19 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:36:48.730 01:19:19 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:48.730 01:19:19 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:48.730 01:19:19 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:48.731 01:19:19 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:48.731 01:19:19 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:48.731 01:19:19 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:48.731 01:19:19 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:48.731 01:19:19 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:48.731 01:19:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:48.988 [2024-07-26 01:19:19.256201] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.xdMK9qzZ1M': No such file or directory 00:36:48.988 [2024-07-26 01:19:19.256233] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:48.988 [2024-07-26 01:19:19.256268] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:48.988 [2024-07-26 01:19:19.256279] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:48.988 [2024-07-26 01:19:19.256291] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:48.988 request: 00:36:48.988 { 00:36:48.988 "name": "nvme0", 00:36:48.988 "trtype": "tcp", 00:36:48.988 "traddr": "127.0.0.1", 00:36:48.988 "adrfam": "ipv4", 00:36:48.988 "trsvcid": "4420", 00:36:48.988 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:48.988 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:48.988 
"prchk_reftag": false, 00:36:48.988 "prchk_guard": false, 00:36:48.988 "hdgst": false, 00:36:48.988 "ddgst": false, 00:36:48.988 "psk": "key0", 00:36:48.988 "method": "bdev_nvme_attach_controller", 00:36:48.988 "req_id": 1 00:36:48.988 } 00:36:48.988 Got JSON-RPC error response 00:36:48.988 response: 00:36:48.988 { 00:36:48.988 "code": -19, 00:36:48.988 "message": "No such device" 00:36:48.988 } 00:36:48.988 01:19:19 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:48.988 01:19:19 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:48.988 01:19:19 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:48.988 01:19:19 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:48.988 01:19:19 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:36:48.988 01:19:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:49.245 01:19:19 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:49.245 01:19:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:49.245 01:19:19 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:49.245 01:19:19 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:49.245 01:19:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:49.245 01:19:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:49.245 01:19:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.81G1vEX41r 00:36:49.245 01:19:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:49.245 01:19:19 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:49.245 01:19:19 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:49.245 01:19:19 keyring_file -- 
nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:49.245 01:19:19 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:49.245 01:19:19 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:49.245 01:19:19 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:49.245 01:19:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.81G1vEX41r 00:36:49.245 01:19:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.81G1vEX41r 00:36:49.246 01:19:19 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.81G1vEX41r 00:36:49.246 01:19:19 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.81G1vEX41r 00:36:49.246 01:19:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.81G1vEX41r 00:36:49.503 01:19:19 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:49.503 01:19:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:49.760 nvme0n1 00:36:49.760 01:19:20 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:36:49.760 01:19:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:49.760 01:19:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:49.760 01:19:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:49.760 01:19:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:49.760 01:19:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:36:50.018 01:19:20 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:36:50.018 01:19:20 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:36:50.018 01:19:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:50.277 01:19:20 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:36:50.277 01:19:20 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:36:50.277 01:19:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:50.277 01:19:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:50.277 01:19:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:50.535 01:19:20 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:36:50.535 01:19:20 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:36:50.535 01:19:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:50.535 01:19:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:50.535 01:19:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:50.535 01:19:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:50.535 01:19:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:50.793 01:19:21 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:36:50.793 01:19:21 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:50.793 01:19:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:51.051 01:19:21 keyring_file -- keyring/file.sh@104 -- # bperf_cmd 
keyring_get_keys 00:36:51.051 01:19:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:51.051 01:19:21 keyring_file -- keyring/file.sh@104 -- # jq length 00:36:51.309 01:19:21 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:36:51.309 01:19:21 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.81G1vEX41r 00:36:51.309 01:19:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.81G1vEX41r 00:36:51.567 01:19:21 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.nVFJHnSLgz 00:36:51.567 01:19:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.nVFJHnSLgz 00:36:51.825 01:19:22 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:51.825 01:19:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:52.083 nvme0n1 00:36:52.083 01:19:22 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:36:52.083 01:19:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:52.341 01:19:22 keyring_file -- keyring/file.sh@112 -- # config='{ 00:36:52.341 "subsystems": [ 00:36:52.341 { 00:36:52.341 "subsystem": "keyring", 00:36:52.341 "config": [ 00:36:52.341 { 00:36:52.341 "method": "keyring_file_add_key", 00:36:52.341 
"params": { 00:36:52.341 "name": "key0", 00:36:52.341 "path": "/tmp/tmp.81G1vEX41r" 00:36:52.341 } 00:36:52.341 }, 00:36:52.341 { 00:36:52.341 "method": "keyring_file_add_key", 00:36:52.341 "params": { 00:36:52.341 "name": "key1", 00:36:52.341 "path": "/tmp/tmp.nVFJHnSLgz" 00:36:52.341 } 00:36:52.341 } 00:36:52.341 ] 00:36:52.341 }, 00:36:52.341 { 00:36:52.341 "subsystem": "iobuf", 00:36:52.341 "config": [ 00:36:52.341 { 00:36:52.341 "method": "iobuf_set_options", 00:36:52.341 "params": { 00:36:52.341 "small_pool_count": 8192, 00:36:52.341 "large_pool_count": 1024, 00:36:52.341 "small_bufsize": 8192, 00:36:52.341 "large_bufsize": 135168 00:36:52.341 } 00:36:52.341 } 00:36:52.341 ] 00:36:52.341 }, 00:36:52.341 { 00:36:52.341 "subsystem": "sock", 00:36:52.341 "config": [ 00:36:52.341 { 00:36:52.341 "method": "sock_set_default_impl", 00:36:52.341 "params": { 00:36:52.341 "impl_name": "posix" 00:36:52.341 } 00:36:52.341 }, 00:36:52.341 { 00:36:52.341 "method": "sock_impl_set_options", 00:36:52.341 "params": { 00:36:52.341 "impl_name": "ssl", 00:36:52.341 "recv_buf_size": 4096, 00:36:52.341 "send_buf_size": 4096, 00:36:52.341 "enable_recv_pipe": true, 00:36:52.341 "enable_quickack": false, 00:36:52.341 "enable_placement_id": 0, 00:36:52.341 "enable_zerocopy_send_server": true, 00:36:52.341 "enable_zerocopy_send_client": false, 00:36:52.341 "zerocopy_threshold": 0, 00:36:52.341 "tls_version": 0, 00:36:52.341 "enable_ktls": false 00:36:52.341 } 00:36:52.341 }, 00:36:52.341 { 00:36:52.341 "method": "sock_impl_set_options", 00:36:52.341 "params": { 00:36:52.341 "impl_name": "posix", 00:36:52.341 "recv_buf_size": 2097152, 00:36:52.341 "send_buf_size": 2097152, 00:36:52.341 "enable_recv_pipe": true, 00:36:52.341 "enable_quickack": false, 00:36:52.341 "enable_placement_id": 0, 00:36:52.341 "enable_zerocopy_send_server": true, 00:36:52.341 "enable_zerocopy_send_client": false, 00:36:52.341 "zerocopy_threshold": 0, 00:36:52.341 "tls_version": 0, 00:36:52.341 "enable_ktls": false 
00:36:52.341 } 00:36:52.341 } 00:36:52.341 ] 00:36:52.341 }, 00:36:52.341 { 00:36:52.341 "subsystem": "vmd", 00:36:52.341 "config": [] 00:36:52.341 }, 00:36:52.341 { 00:36:52.341 "subsystem": "accel", 00:36:52.341 "config": [ 00:36:52.341 { 00:36:52.341 "method": "accel_set_options", 00:36:52.341 "params": { 00:36:52.341 "small_cache_size": 128, 00:36:52.341 "large_cache_size": 16, 00:36:52.341 "task_count": 2048, 00:36:52.341 "sequence_count": 2048, 00:36:52.341 "buf_count": 2048 00:36:52.341 } 00:36:52.341 } 00:36:52.341 ] 00:36:52.341 }, 00:36:52.341 { 00:36:52.341 "subsystem": "bdev", 00:36:52.341 "config": [ 00:36:52.341 { 00:36:52.341 "method": "bdev_set_options", 00:36:52.341 "params": { 00:36:52.341 "bdev_io_pool_size": 65535, 00:36:52.341 "bdev_io_cache_size": 256, 00:36:52.341 "bdev_auto_examine": true, 00:36:52.341 "iobuf_small_cache_size": 128, 00:36:52.341 "iobuf_large_cache_size": 16 00:36:52.341 } 00:36:52.341 }, 00:36:52.341 { 00:36:52.341 "method": "bdev_raid_set_options", 00:36:52.341 "params": { 00:36:52.341 "process_window_size_kb": 1024, 00:36:52.341 "process_max_bandwidth_mb_sec": 0 00:36:52.341 } 00:36:52.341 }, 00:36:52.341 { 00:36:52.342 "method": "bdev_iscsi_set_options", 00:36:52.342 "params": { 00:36:52.342 "timeout_sec": 30 00:36:52.342 } 00:36:52.342 }, 00:36:52.342 { 00:36:52.342 "method": "bdev_nvme_set_options", 00:36:52.342 "params": { 00:36:52.342 "action_on_timeout": "none", 00:36:52.342 "timeout_us": 0, 00:36:52.342 "timeout_admin_us": 0, 00:36:52.342 "keep_alive_timeout_ms": 10000, 00:36:52.342 "arbitration_burst": 0, 00:36:52.342 "low_priority_weight": 0, 00:36:52.342 "medium_priority_weight": 0, 00:36:52.342 "high_priority_weight": 0, 00:36:52.342 "nvme_adminq_poll_period_us": 10000, 00:36:52.342 "nvme_ioq_poll_period_us": 0, 00:36:52.342 "io_queue_requests": 512, 00:36:52.342 "delay_cmd_submit": true, 00:36:52.342 "transport_retry_count": 4, 00:36:52.342 "bdev_retry_count": 3, 00:36:52.342 "transport_ack_timeout": 0, 
00:36:52.342 "ctrlr_loss_timeout_sec": 0, 00:36:52.342 "reconnect_delay_sec": 0, 00:36:52.342 "fast_io_fail_timeout_sec": 0, 00:36:52.342 "disable_auto_failback": false, 00:36:52.342 "generate_uuids": false, 00:36:52.342 "transport_tos": 0, 00:36:52.342 "nvme_error_stat": false, 00:36:52.342 "rdma_srq_size": 0, 00:36:52.342 "io_path_stat": false, 00:36:52.342 "allow_accel_sequence": false, 00:36:52.342 "rdma_max_cq_size": 0, 00:36:52.342 "rdma_cm_event_timeout_ms": 0, 00:36:52.342 "dhchap_digests": [ 00:36:52.342 "sha256", 00:36:52.342 "sha384", 00:36:52.342 "sha512" 00:36:52.342 ], 00:36:52.342 "dhchap_dhgroups": [ 00:36:52.342 "null", 00:36:52.342 "ffdhe2048", 00:36:52.342 "ffdhe3072", 00:36:52.342 "ffdhe4096", 00:36:52.342 "ffdhe6144", 00:36:52.342 "ffdhe8192" 00:36:52.342 ] 00:36:52.342 } 00:36:52.342 }, 00:36:52.342 { 00:36:52.342 "method": "bdev_nvme_attach_controller", 00:36:52.342 "params": { 00:36:52.342 "name": "nvme0", 00:36:52.342 "trtype": "TCP", 00:36:52.342 "adrfam": "IPv4", 00:36:52.342 "traddr": "127.0.0.1", 00:36:52.342 "trsvcid": "4420", 00:36:52.342 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:52.342 "prchk_reftag": false, 00:36:52.342 "prchk_guard": false, 00:36:52.342 "ctrlr_loss_timeout_sec": 0, 00:36:52.342 "reconnect_delay_sec": 0, 00:36:52.342 "fast_io_fail_timeout_sec": 0, 00:36:52.342 "psk": "key0", 00:36:52.342 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:52.342 "hdgst": false, 00:36:52.342 "ddgst": false 00:36:52.342 } 00:36:52.342 }, 00:36:52.342 { 00:36:52.342 "method": "bdev_nvme_set_hotplug", 00:36:52.342 "params": { 00:36:52.342 "period_us": 100000, 00:36:52.342 "enable": false 00:36:52.342 } 00:36:52.342 }, 00:36:52.342 { 00:36:52.342 "method": "bdev_wait_for_examine" 00:36:52.342 } 00:36:52.342 ] 00:36:52.342 }, 00:36:52.342 { 00:36:52.342 "subsystem": "nbd", 00:36:52.342 "config": [] 00:36:52.342 } 00:36:52.342 ] 00:36:52.342 }' 00:36:52.342 01:19:22 keyring_file -- keyring/file.sh@114 -- # killprocess 2010577 00:36:52.342 
01:19:22 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2010577 ']' 00:36:52.342 01:19:22 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2010577 00:36:52.342 01:19:22 keyring_file -- common/autotest_common.sh@955 -- # uname 00:36:52.342 01:19:22 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:52.342 01:19:22 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2010577 00:36:52.342 01:19:22 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:52.342 01:19:22 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:52.342 01:19:22 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2010577' 00:36:52.342 killing process with pid 2010577 00:36:52.342 01:19:22 keyring_file -- common/autotest_common.sh@969 -- # kill 2010577 00:36:52.342 Received shutdown signal, test time was about 1.000000 seconds 00:36:52.342 00:36:52.342 Latency(us) 00:36:52.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:52.342 =================================================================================================================== 00:36:52.342 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:52.342 01:19:22 keyring_file -- common/autotest_common.sh@974 -- # wait 2010577 00:36:52.600 01:19:22 keyring_file -- keyring/file.sh@117 -- # bperfpid=2011984 00:36:52.600 01:19:22 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2011984 /var/tmp/bperf.sock 00:36:52.600 01:19:22 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2011984 ']' 00:36:52.600 01:19:22 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:52.600 01:19:22 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:52.600 01:19:22 keyring_file -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:36:52.600 01:19:22 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:52.600 01:19:22 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:36:52.600 "subsystems": [ 00:36:52.600 { 00:36:52.601 "subsystem": "keyring", 00:36:52.601 "config": [ 00:36:52.601 { 00:36:52.601 "method": "keyring_file_add_key", 00:36:52.601 "params": { 00:36:52.601 "name": "key0", 00:36:52.601 "path": "/tmp/tmp.81G1vEX41r" 00:36:52.601 } 00:36:52.601 }, 00:36:52.601 { 00:36:52.601 "method": "keyring_file_add_key", 00:36:52.601 "params": { 00:36:52.601 "name": "key1", 00:36:52.601 "path": "/tmp/tmp.nVFJHnSLgz" 00:36:52.601 } 00:36:52.601 } 00:36:52.601 ] 00:36:52.601 }, 00:36:52.601 { 00:36:52.601 "subsystem": "iobuf", 00:36:52.601 "config": [ 00:36:52.601 { 00:36:52.601 "method": "iobuf_set_options", 00:36:52.601 "params": { 00:36:52.601 "small_pool_count": 8192, 00:36:52.601 "large_pool_count": 1024, 00:36:52.601 "small_bufsize": 8192, 00:36:52.601 "large_bufsize": 135168 00:36:52.601 } 00:36:52.601 } 00:36:52.601 ] 00:36:52.601 }, 00:36:52.601 { 00:36:52.601 "subsystem": "sock", 00:36:52.601 "config": [ 00:36:52.601 { 00:36:52.601 "method": "sock_set_default_impl", 00:36:52.601 "params": { 00:36:52.601 "impl_name": "posix" 00:36:52.601 } 00:36:52.601 }, 00:36:52.601 { 00:36:52.601 "method": "sock_impl_set_options", 00:36:52.601 "params": { 00:36:52.601 "impl_name": "ssl", 00:36:52.601 "recv_buf_size": 4096, 00:36:52.601 "send_buf_size": 4096, 00:36:52.601 "enable_recv_pipe": true, 00:36:52.601 "enable_quickack": false, 00:36:52.601 "enable_placement_id": 0, 00:36:52.601 "enable_zerocopy_send_server": true, 00:36:52.601 "enable_zerocopy_send_client": false, 00:36:52.601 "zerocopy_threshold": 0, 00:36:52.601 "tls_version": 0, 00:36:52.601 "enable_ktls": false 00:36:52.601 } 00:36:52.601 }, 00:36:52.601 { 00:36:52.601 "method": 
"sock_impl_set_options", 00:36:52.601 "params": { 00:36:52.601 "impl_name": "posix", 00:36:52.601 "recv_buf_size": 2097152, 00:36:52.601 "send_buf_size": 2097152, 00:36:52.601 "enable_recv_pipe": true, 00:36:52.601 "enable_quickack": false, 00:36:52.601 "enable_placement_id": 0, 00:36:52.601 "enable_zerocopy_send_server": true, 00:36:52.601 "enable_zerocopy_send_client": false, 00:36:52.601 "zerocopy_threshold": 0, 00:36:52.601 "tls_version": 0, 00:36:52.601 "enable_ktls": false 00:36:52.601 } 00:36:52.601 } 00:36:52.601 ] 00:36:52.601 }, 00:36:52.601 { 00:36:52.601 "subsystem": "vmd", 00:36:52.601 "config": [] 00:36:52.601 }, 00:36:52.601 { 00:36:52.601 "subsystem": "accel", 00:36:52.601 "config": [ 00:36:52.601 { 00:36:52.601 "method": "accel_set_options", 00:36:52.601 "params": { 00:36:52.601 "small_cache_size": 128, 00:36:52.601 "large_cache_size": 16, 00:36:52.601 "task_count": 2048, 00:36:52.601 "sequence_count": 2048, 00:36:52.601 "buf_count": 2048 00:36:52.601 } 00:36:52.601 } 00:36:52.601 ] 00:36:52.601 }, 00:36:52.601 { 00:36:52.601 "subsystem": "bdev", 00:36:52.601 "config": [ 00:36:52.601 { 00:36:52.601 "method": "bdev_set_options", 00:36:52.601 "params": { 00:36:52.601 "bdev_io_pool_size": 65535, 00:36:52.601 "bdev_io_cache_size": 256, 00:36:52.601 "bdev_auto_examine": true, 00:36:52.601 "iobuf_small_cache_size": 128, 00:36:52.601 "iobuf_large_cache_size": 16 00:36:52.601 } 00:36:52.601 }, 00:36:52.601 { 00:36:52.601 "method": "bdev_raid_set_options", 00:36:52.601 "params": { 00:36:52.601 "process_window_size_kb": 1024, 00:36:52.601 "process_max_bandwidth_mb_sec": 0 00:36:52.601 } 00:36:52.601 }, 00:36:52.601 { 00:36:52.601 "method": "bdev_iscsi_set_options", 00:36:52.601 "params": { 00:36:52.601 "timeout_sec": 30 00:36:52.601 } 00:36:52.601 }, 00:36:52.601 { 00:36:52.601 "method": "bdev_nvme_set_options", 00:36:52.601 "params": { 00:36:52.601 "action_on_timeout": "none", 00:36:52.601 "timeout_us": 0, 00:36:52.601 "timeout_admin_us": 0, 00:36:52.601 
"keep_alive_timeout_ms": 10000, 00:36:52.601 "arbitration_burst": 0, 00:36:52.601 "low_priority_weight": 0, 00:36:52.601 "medium_priority_weight": 0, 00:36:52.601 "high_priority_weight": 0, 00:36:52.601 "nvme_adminq_poll_period_us": 10000, 00:36:52.601 "nvme_ioq_poll_period_us": 0, 00:36:52.601 "io_queue_requests": 512, 00:36:52.601 "delay_cmd_submit": true, 00:36:52.601 "transport_retry_count": 4, 00:36:52.601 "bdev_retry_count": 3, 00:36:52.601 "transport_ack_timeout": 0, 00:36:52.601 "ctrlr_loss_timeout_sec": 0, 00:36:52.601 "reconnect_delay_sec": 0, 00:36:52.601 "fast_io_fail_timeout_sec": 0, 00:36:52.601 "disable_auto_failback": false, 00:36:52.601 "generate_uuids": false, 00:36:52.601 "transport_tos": 0, 00:36:52.601 "nvme_error_stat": false, 00:36:52.601 "rdma_srq_size": 0, 00:36:52.601 "io_path_stat": false, 00:36:52.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:52.601 "allow_accel_sequence": false, 00:36:52.601 "rdma_max_cq_size": 0, 00:36:52.601 "rdma_cm_event_timeout_ms": 0, 00:36:52.601 "dhchap_digests": [ 00:36:52.601 "sha256", 00:36:52.601 "sha384", 00:36:52.601 "sha512" 00:36:52.601 ], 00:36:52.601 "dhchap_dhgroups": [ 00:36:52.601 "null", 00:36:52.601 "ffdhe2048", 00:36:52.601 "ffdhe3072", 00:36:52.601 "ffdhe4096", 00:36:52.601 "ffdhe6144", 00:36:52.601 "ffdhe8192" 00:36:52.601 ] 00:36:52.601 } 00:36:52.601 }, 00:36:52.601 { 00:36:52.601 "method": "bdev_nvme_attach_controller", 00:36:52.601 "params": { 00:36:52.601 "name": "nvme0", 00:36:52.601 "trtype": "TCP", 00:36:52.601 "adrfam": "IPv4", 00:36:52.601 "traddr": "127.0.0.1", 00:36:52.601 "trsvcid": "4420", 00:36:52.601 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:52.601 "prchk_reftag": false, 00:36:52.601 "prchk_guard": false, 00:36:52.601 "ctrlr_loss_timeout_sec": 0, 00:36:52.601 "reconnect_delay_sec": 0, 00:36:52.601 "fast_io_fail_timeout_sec": 0, 00:36:52.601 "psk": "key0", 00:36:52.601 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:52.601 
"hdgst": false, 00:36:52.601 "ddgst": false 00:36:52.601 } 00:36:52.601 }, 00:36:52.601 { 00:36:52.601 "method": "bdev_nvme_set_hotplug", 00:36:52.601 "params": { 00:36:52.601 "period_us": 100000, 00:36:52.601 "enable": false 00:36:52.601 } 00:36:52.601 }, 00:36:52.601 { 00:36:52.601 "method": "bdev_wait_for_examine" 00:36:52.601 } 00:36:52.601 ] 00:36:52.601 }, 00:36:52.601 { 00:36:52.601 "subsystem": "nbd", 00:36:52.601 "config": [] 00:36:52.601 } 00:36:52.601 ] 00:36:52.601 }' 00:36:52.601 01:19:22 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:52.601 01:19:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:52.601 [2024-07-26 01:19:23.023126] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:36:52.601 [2024-07-26 01:19:23.023211] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2011984 ] 00:36:52.861 EAL: No free 2048 kB hugepages reported on node 1 00:36:52.862 [2024-07-26 01:19:23.082052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:52.862 [2024-07-26 01:19:23.167680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:53.129 [2024-07-26 01:19:23.353786] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:53.701 01:19:23 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:53.701 01:19:23 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:36:53.701 01:19:23 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:36:53.701 01:19:23 keyring_file -- keyring/file.sh@120 -- # jq length 00:36:53.701 01:19:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:53.958 
01:19:24 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:36:53.958 01:19:24 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:36:53.958 01:19:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:53.958 01:19:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:53.958 01:19:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:53.958 01:19:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:53.958 01:19:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:54.215 01:19:24 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:54.215 01:19:24 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:36:54.215 01:19:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:54.215 01:19:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:54.215 01:19:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:54.215 01:19:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:54.215 01:19:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:54.472 01:19:24 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:36:54.472 01:19:24 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:36:54.472 01:19:24 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:36:54.472 01:19:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:54.730 01:19:24 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:36:54.730 01:19:24 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:54.730 01:19:24 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.81G1vEX41r 
/tmp/tmp.nVFJHnSLgz 00:36:54.730 01:19:24 keyring_file -- keyring/file.sh@20 -- # killprocess 2011984 00:36:54.730 01:19:24 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2011984 ']' 00:36:54.730 01:19:24 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2011984 00:36:54.730 01:19:25 keyring_file -- common/autotest_common.sh@955 -- # uname 00:36:54.730 01:19:25 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:54.730 01:19:25 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2011984 00:36:54.730 01:19:25 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:54.730 01:19:25 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:54.730 01:19:25 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2011984' 00:36:54.730 killing process with pid 2011984 00:36:54.730 01:19:25 keyring_file -- common/autotest_common.sh@969 -- # kill 2011984 00:36:54.730 Received shutdown signal, test time was about 1.000000 seconds 00:36:54.730 00:36:54.730 Latency(us) 00:36:54.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:54.731 =================================================================================================================== 00:36:54.731 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:54.731 01:19:25 keyring_file -- common/autotest_common.sh@974 -- # wait 2011984 00:36:54.988 01:19:25 keyring_file -- keyring/file.sh@21 -- # killprocess 2010573 00:36:54.988 01:19:25 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2010573 ']' 00:36:54.988 01:19:25 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2010573 00:36:54.988 01:19:25 keyring_file -- common/autotest_common.sh@955 -- # uname 00:36:54.988 01:19:25 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:54.988 01:19:25 keyring_file -- common/autotest_common.sh@956 -- # 
ps --no-headers -o comm= 2010573 00:36:54.988 01:19:25 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:54.988 01:19:25 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:54.988 01:19:25 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2010573' 00:36:54.988 killing process with pid 2010573 00:36:54.988 01:19:25 keyring_file -- common/autotest_common.sh@969 -- # kill 2010573 00:36:54.988 [2024-07-26 01:19:25.261543] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:36:54.988 01:19:25 keyring_file -- common/autotest_common.sh@974 -- # wait 2010573 00:36:55.247 00:36:55.247 real 0m14.056s 00:36:55.247 user 0m35.095s 00:36:55.247 sys 0m3.312s 00:36:55.247 01:19:25 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:55.247 01:19:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:55.247 ************************************ 00:36:55.247 END TEST keyring_file 00:36:55.247 ************************************ 00:36:55.247 01:19:25 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:36:55.247 01:19:25 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:55.247 01:19:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:55.247 01:19:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:55.247 01:19:25 -- common/autotest_common.sh@10 -- # set +x 00:36:55.505 ************************************ 00:36:55.505 START TEST keyring_linux 00:36:55.505 ************************************ 00:36:55.505 01:19:25 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:55.505 * Looking for test storage... 
00:36:55.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:55.505 01:19:25 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:55.505 01:19:25 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:55.505 01:19:25 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:55.505 01:19:25 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:55.505 01:19:25 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:55.506 01:19:25 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:55.506 01:19:25 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:55.506 01:19:25 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:55.506 01:19:25 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:55.506 01:19:25 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:55.506 01:19:25 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:55.506 01:19:25 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:55.506 01:19:25 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:55.506 01:19:25 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:55.506 01:19:25 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:55.506 01:19:25 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:55.506 01:19:25 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:55.506 01:19:25 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:55.506 01:19:25 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:55.506 01:19:25 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:55.506 01:19:25 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:55.506 01:19:25 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:55.506 01:19:25 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:36:55.506 01:19:25 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:55.506 01:19:25 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:55.506 01:19:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:55.506 01:19:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:55.506 01:19:25 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:55.506 01:19:25 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:55.506 /tmp/:spdk-test:key0 00:36:55.506 01:19:25 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:55.506 01:19:25 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:55.506 01:19:25 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:55.506 01:19:25 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:55.506 01:19:25 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:55.506 01:19:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:55.506 01:19:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:55.506 01:19:25 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:55.506 01:19:25 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:55.506 01:19:25 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:55.506 /tmp/:spdk-test:key1 00:36:55.506 01:19:25 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2012391 00:36:55.506 01:19:25 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:55.506 01:19:25 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2012391 00:36:55.506 01:19:25 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2012391 ']' 00:36:55.506 01:19:25 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:55.506 01:19:25 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:55.506 01:19:25 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:55.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:55.506 01:19:25 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:55.506 01:19:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:55.506 [2024-07-26 01:19:25.869333] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:36:55.506 [2024-07-26 01:19:25.869454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2012391 ] 00:36:55.506 EAL: No free 2048 kB hugepages reported on node 1 00:36:55.506 [2024-07-26 01:19:25.924804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:55.766 [2024-07-26 01:19:26.013790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:56.024 01:19:26 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:56.024 01:19:26 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:36:56.025 01:19:26 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:56.025 01:19:26 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.025 01:19:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:56.025 [2024-07-26 01:19:26.267323] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:56.025 null0 00:36:56.025 [2024-07-26 01:19:26.299410] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:56.025 [2024-07-26 01:19:26.299893] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:56.025 01:19:26 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.025 01:19:26 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:56.025 367712914 00:36:56.025 01:19:26 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:56.025 226217331 00:36:56.025 01:19:26 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2012420 00:36:56.025 01:19:26 keyring_linux -- keyring/linux.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:56.025 01:19:26 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2012420 /var/tmp/bperf.sock 00:36:56.025 01:19:26 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2012420 ']' 00:36:56.025 01:19:26 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:56.025 01:19:26 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:56.025 01:19:26 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:56.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:56.025 01:19:26 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:56.025 01:19:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:56.025 [2024-07-26 01:19:26.367890] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:36:56.025 [2024-07-26 01:19:26.367967] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2012420 ] 00:36:56.025 EAL: No free 2048 kB hugepages reported on node 1 00:36:56.025 [2024-07-26 01:19:26.434967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:56.283 [2024-07-26 01:19:26.527353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:56.283 01:19:26 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:56.283 01:19:26 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:36:56.283 01:19:26 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:56.283 01:19:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:56.541 01:19:26 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:56.541 01:19:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:56.799 01:19:27 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:56.799 01:19:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:57.058 [2024-07-26 01:19:27.395805] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:57.058 
nvme0n1 00:36:57.058 01:19:27 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:36:57.058 01:19:27 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:57.058 01:19:27 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:57.316 01:19:27 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:57.316 01:19:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:57.316 01:19:27 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:57.316 01:19:27 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:57.316 01:19:27 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:57.316 01:19:27 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:57.316 01:19:27 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:57.316 01:19:27 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:57.316 01:19:27 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:57.316 01:19:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:57.574 01:19:27 keyring_linux -- keyring/linux.sh@25 -- # sn=367712914 00:36:57.574 01:19:27 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:57.574 01:19:27 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:57.574 01:19:27 keyring_linux -- keyring/linux.sh@26 -- # [[ 367712914 == \3\6\7\7\1\2\9\1\4 ]] 00:36:57.574 01:19:27 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 367712914 00:36:57.574 01:19:27 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:57.574 01:19:27 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:57.834 Running I/O for 1 seconds... 00:36:58.770 00:36:58.770 Latency(us) 00:36:58.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:58.770 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:58.770 nvme0n1 : 1.01 6460.30 25.24 0.00 0.00 19678.61 10485.76 31263.10 00:36:58.770 =================================================================================================================== 00:36:58.770 Total : 6460.30 25.24 0.00 0.00 19678.61 10485.76 31263.10 00:36:58.770 0 00:36:58.770 01:19:29 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:58.770 01:19:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:59.028 01:19:29 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:59.028 01:19:29 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:59.028 01:19:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:59.028 01:19:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:59.028 01:19:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:59.028 01:19:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:59.285 01:19:29 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:59.285 01:19:29 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:59.285 01:19:29 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:59.285 01:19:29 keyring_linux 
-- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:59.285 01:19:29 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:36:59.285 01:19:29 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:59.285 01:19:29 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:59.285 01:19:29 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:59.285 01:19:29 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:59.285 01:19:29 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:59.285 01:19:29 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:59.285 01:19:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:59.543 [2024-07-26 01:19:29.875464] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:59.543 [2024-07-26 01:19:29.875515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22157f0 (107): Transport endpoint is not connected 00:36:59.543 [2024-07-26 01:19:29.876509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x22157f0 (9): Bad file descriptor 00:36:59.543 [2024-07-26 01:19:29.877508] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:59.543 [2024-07-26 01:19:29.877528] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:59.543 [2024-07-26 01:19:29.877552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:59.543 request: 00:36:59.543 { 00:36:59.543 "name": "nvme0", 00:36:59.543 "trtype": "tcp", 00:36:59.543 "traddr": "127.0.0.1", 00:36:59.543 "adrfam": "ipv4", 00:36:59.543 "trsvcid": "4420", 00:36:59.543 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:59.543 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:59.543 "prchk_reftag": false, 00:36:59.543 "prchk_guard": false, 00:36:59.543 "hdgst": false, 00:36:59.543 "ddgst": false, 00:36:59.543 "psk": ":spdk-test:key1", 00:36:59.543 "method": "bdev_nvme_attach_controller", 00:36:59.543 "req_id": 1 00:36:59.543 } 00:36:59.543 Got JSON-RPC error response 00:36:59.543 response: 00:36:59.543 { 00:36:59.543 "code": -5, 00:36:59.543 "message": "Input/output error" 00:36:59.543 } 00:36:59.543 01:19:29 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:36:59.543 01:19:29 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:59.543 01:19:29 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:59.543 01:19:29 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:59.543 01:19:29 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:59.543 01:19:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:59.543 01:19:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:59.543 01:19:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:59.543 01:19:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:59.543 01:19:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key0 00:36:59.543 01:19:29 keyring_linux -- keyring/linux.sh@33 -- # sn=367712914 00:36:59.543 01:19:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 367712914 00:36:59.543 1 links removed 00:36:59.543 01:19:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:59.543 01:19:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:59.543 01:19:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:59.543 01:19:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:59.543 01:19:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:59.543 01:19:29 keyring_linux -- keyring/linux.sh@33 -- # sn=226217331 00:36:59.543 01:19:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 226217331 00:36:59.543 1 links removed 00:36:59.543 01:19:29 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2012420 00:36:59.543 01:19:29 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2012420 ']' 00:36:59.543 01:19:29 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2012420 00:36:59.543 01:19:29 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:36:59.543 01:19:29 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:59.543 01:19:29 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2012420 00:36:59.543 01:19:29 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:59.543 01:19:29 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:59.543 01:19:29 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2012420' 00:36:59.543 killing process with pid 2012420 00:36:59.543 01:19:29 keyring_linux -- common/autotest_common.sh@969 -- # kill 2012420 00:36:59.543 Received shutdown signal, test time was about 1.000000 seconds 00:36:59.543 00:36:59.543 Latency(us) 00:36:59.543 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:59.543 =================================================================================================================== 00:36:59.543 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:59.543 01:19:29 keyring_linux -- common/autotest_common.sh@974 -- # wait 2012420 00:36:59.800 01:19:30 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2012391 00:36:59.800 01:19:30 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2012391 ']' 00:36:59.800 01:19:30 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2012391 00:36:59.800 01:19:30 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:36:59.800 01:19:30 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:59.800 01:19:30 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2012391 00:36:59.800 01:19:30 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:59.800 01:19:30 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:59.800 01:19:30 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2012391' 00:36:59.800 killing process with pid 2012391 00:36:59.801 01:19:30 keyring_linux -- common/autotest_common.sh@969 -- # kill 2012391 00:36:59.801 01:19:30 keyring_linux -- common/autotest_common.sh@974 -- # wait 2012391 00:37:00.368 00:37:00.368 real 0m4.860s 00:37:00.368 user 0m9.175s 00:37:00.368 sys 0m1.653s 00:37:00.368 01:19:30 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:00.368 01:19:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:00.368 ************************************ 00:37:00.368 END TEST keyring_linux 00:37:00.368 ************************************ 00:37:00.368 01:19:30 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:37:00.368 01:19:30 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:37:00.368 01:19:30 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 
']' 00:37:00.368 01:19:30 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:37:00.368 01:19:30 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:37:00.368 01:19:30 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:37:00.368 01:19:30 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:37:00.368 01:19:30 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:37:00.368 01:19:30 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:37:00.368 01:19:30 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:37:00.368 01:19:30 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:37:00.368 01:19:30 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:37:00.368 01:19:30 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:37:00.368 01:19:30 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:37:00.368 01:19:30 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:37:00.368 01:19:30 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:37:00.369 01:19:30 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:37:00.369 01:19:30 -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:00.369 01:19:30 -- common/autotest_common.sh@10 -- # set +x 00:37:00.369 01:19:30 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:37:00.369 01:19:30 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:37:00.369 01:19:30 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:37:00.369 01:19:30 -- common/autotest_common.sh@10 -- # set +x 00:37:01.745 INFO: APP EXITING 00:37:01.745 INFO: killing all VMs 00:37:01.745 INFO: killing vhost app 00:37:01.745 INFO: EXIT DONE 00:37:03.122 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:37:03.122 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:37:03.122 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:37:03.122 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:37:03.122 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:37:03.122 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:37:03.122 0000:00:04.2 (8086 0e22): Already 
using the ioatdma driver 00:37:03.122 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:37:03.122 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:37:03.122 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:37:03.123 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:37:03.123 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:37:03.123 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:37:03.123 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:37:03.123 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:37:03.123 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:37:03.123 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:37:04.494 Cleaning 00:37:04.494 Removing: /var/run/dpdk/spdk0/config 00:37:04.494 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:04.494 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:04.494 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:04.494 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:04.494 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:04.494 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:04.494 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:04.494 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:04.494 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:04.494 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:04.494 Removing: /var/run/dpdk/spdk1/config 00:37:04.494 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:04.494 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:04.494 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:04.494 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:04.494 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:04.494 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:04.494 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:04.494 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:04.494 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:04.494 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:04.494 Removing: /var/run/dpdk/spdk1/mp_socket 00:37:04.494 Removing: /var/run/dpdk/spdk2/config 00:37:04.494 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:04.494 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:04.494 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:04.494 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:04.494 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:04.494 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:04.494 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:04.494 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:04.494 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:04.494 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:04.494 Removing: /var/run/dpdk/spdk3/config 00:37:04.494 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:04.494 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:04.494 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:04.494 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:04.494 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:04.494 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:04.494 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:04.494 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:04.494 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:04.494 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:04.494 Removing: /var/run/dpdk/spdk4/config 00:37:04.494 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:04.494 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:04.494 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:04.494 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:04.494 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:04.494 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:04.495 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:04.495 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:04.495 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:04.495 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:04.495 Removing: /dev/shm/bdev_svc_trace.1 00:37:04.495 Removing: /dev/shm/nvmf_trace.0 00:37:04.495 Removing: /dev/shm/spdk_tgt_trace.pid1696816 00:37:04.495 Removing: /var/run/dpdk/spdk0 00:37:04.495 Removing: /var/run/dpdk/spdk1 00:37:04.495 Removing: /var/run/dpdk/spdk2 00:37:04.495 Removing: /var/run/dpdk/spdk3 00:37:04.495 Removing: /var/run/dpdk/spdk4 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1695267 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1696001 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1696816 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1697253 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1697945 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1698085 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1698793 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1698811 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1699052 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1700367 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1701284 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1701545 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1701782 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1701985 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1702174 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1702337 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1702489 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1702673 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1702988 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1705342 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1705544 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1705787 00:37:04.495 Removing: 
/var/run/dpdk/spdk_pid1705795 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1706109 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1706234 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1706540 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1706665 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1706835 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1706851 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1707107 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1707139 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1707513 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1707666 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1707951 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1709943 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1712547 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1719396 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1719806 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1722310 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1722475 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1725177 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1729393 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1731500 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1737770 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1742993 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1744277 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1744970 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1755083 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1757416 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1811126 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1814403 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1818228 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1822169 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1822171 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1823222 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1823873 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1824527 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1824926 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1824935 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1825191 
00:37:04.495 Removing: /var/run/dpdk/spdk_pid1825203 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1825263 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1825864 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1826520 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1827173 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1827536 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1827582 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1827718 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1828600 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1829314 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1834626 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1859887 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1862667 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1863844 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1865160 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1865173 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1865312 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1865451 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1865775 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1867079 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1867803 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1868109 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1869718 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1870138 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1870811 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1873702 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1876955 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1880373 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1903995 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1906645 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1910526 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1911381 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1912453 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1915136 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1917376 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1921455 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1921575 00:37:04.495 Removing: 
/var/run/dpdk/spdk_pid1924275 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1924473 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1924612 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1924880 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1924886 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1925958 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1927132 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1928308 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1929498 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1930672 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1932069 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1936267 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1936613 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1938014 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1938745 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1942455 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1944305 00:37:04.495 Removing: /var/run/dpdk/spdk_pid1947719 00:37:04.753 Removing: /var/run/dpdk/spdk_pid1951032 00:37:04.753 Removing: /var/run/dpdk/spdk_pid1957245 00:37:04.753 Removing: /var/run/dpdk/spdk_pid1961592 00:37:04.753 Removing: /var/run/dpdk/spdk_pid1961594 00:37:04.753 Removing: /var/run/dpdk/spdk_pid1974400 00:37:04.753 Removing: /var/run/dpdk/spdk_pid1974806 00:37:04.753 Removing: /var/run/dpdk/spdk_pid1975285 00:37:04.753 Removing: /var/run/dpdk/spdk_pid1975739 00:37:04.753 Removing: /var/run/dpdk/spdk_pid1976311 00:37:04.753 Removing: /var/run/dpdk/spdk_pid1976724 00:37:04.753 Removing: /var/run/dpdk/spdk_pid1977128 00:37:04.753 Removing: /var/run/dpdk/spdk_pid1977540 00:37:04.753 Removing: /var/run/dpdk/spdk_pid1979909 00:37:04.753 Removing: /var/run/dpdk/spdk_pid1980174 00:37:04.753 Removing: /var/run/dpdk/spdk_pid1983935 00:37:04.753 Removing: /var/run/dpdk/spdk_pid1984005 00:37:04.753 Removing: /var/run/dpdk/spdk_pid1985609 00:37:04.753 Removing: /var/run/dpdk/spdk_pid1990651 00:37:04.753 Removing: /var/run/dpdk/spdk_pid1990656 00:37:04.753 Removing: /var/run/dpdk/spdk_pid1993428 
00:37:04.753 Removing: /var/run/dpdk/spdk_pid1994818 00:37:04.753 Removing: /var/run/dpdk/spdk_pid1996219 00:37:04.753 Removing: /var/run/dpdk/spdk_pid1997103 00:37:04.753 Removing: /var/run/dpdk/spdk_pid1999098 00:37:04.753 Removing: /var/run/dpdk/spdk_pid1999975 00:37:04.753 Removing: /var/run/dpdk/spdk_pid2005210 00:37:04.753 Removing: /var/run/dpdk/spdk_pid2005506 00:37:04.753 Removing: /var/run/dpdk/spdk_pid2005897 00:37:04.753 Removing: /var/run/dpdk/spdk_pid2007456 00:37:04.753 Removing: /var/run/dpdk/spdk_pid2007734 00:37:04.753 Removing: /var/run/dpdk/spdk_pid2008131 00:37:04.753 Removing: /var/run/dpdk/spdk_pid2010573 00:37:04.753 Removing: /var/run/dpdk/spdk_pid2010577 00:37:04.753 Removing: /var/run/dpdk/spdk_pid2011984 00:37:04.753 Removing: /var/run/dpdk/spdk_pid2012391 00:37:04.753 Removing: /var/run/dpdk/spdk_pid2012420 00:37:04.753 Clean 00:37:04.753 01:19:35 -- common/autotest_common.sh@1451 -- # return 0 00:37:04.753 01:19:35 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:37:04.753 01:19:35 -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:04.753 01:19:35 -- common/autotest_common.sh@10 -- # set +x 00:37:04.753 01:19:35 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:37:04.753 01:19:35 -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:04.753 01:19:35 -- common/autotest_common.sh@10 -- # set +x 00:37:04.753 01:19:35 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:04.753 01:19:35 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:04.753 01:19:35 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:04.753 01:19:35 -- spdk/autotest.sh@395 -- # hash lcov 00:37:04.753 01:19:35 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:04.753 01:19:35 -- spdk/autotest.sh@397 -- # hostname 00:37:04.753 01:19:35 -- 
spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:05.011 geninfo: WARNING: invalid characters removed from testname! 00:37:37.116 01:20:02 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:37.116 01:20:06 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:39.649 01:20:09 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:42.171 01:20:12 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:45.452 01:20:15 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:47.980 01:20:18 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:51.263 01:20:21 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:51.263 01:20:21 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:51.263 01:20:21 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:37:51.263 01:20:21 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:51.263 01:20:21 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:51.263 01:20:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.263 01:20:21 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:51.263 01:20:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:51.263 01:20:21 -- paths/export.sh@5 -- $ export PATH
00:37:51.263 01:20:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:51.263 01:20:21 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:37:51.263 01:20:21 -- common/autobuild_common.sh@447 -- $ date +%s
00:37:51.263 01:20:21 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721949621.XXXXXX
00:37:51.263 01:20:21 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721949621.fYFOYQ
00:37:51.263 01:20:21 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:37:51.263 01:20:21 -- common/autobuild_common.sh@453 -- $ '[' -n v22.11.4 ']'
00:37:51.263 01:20:21 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:37:51.263 01:20:21 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:37:51.263 01:20:21 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:37:51.263 01:20:21 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:37:51.263 01:20:21 -- common/autobuild_common.sh@463 -- $ get_config_params
00:37:51.263 01:20:21 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:37:51.263 01:20:21 -- common/autotest_common.sh@10 -- $ set +x
00:37:51.263 01:20:21 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:37:51.263 01:20:21 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:37:51.263 01:20:21 -- pm/common@17 -- $ local monitor
00:37:51.263 01:20:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:51.263 01:20:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:51.263 01:20:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:51.263 01:20:21 -- pm/common@21 -- $ date +%s
00:37:51.263 01:20:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:51.263 01:20:21 -- pm/common@21 -- $ date +%s
00:37:51.263 01:20:21 -- pm/common@25 -- $ sleep 1
00:37:51.263 01:20:21 -- pm/common@21 -- $ date +%s
00:37:51.263 01:20:21 -- pm/common@21 -- $ date +%s
00:37:51.263 01:20:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721949621
00:37:51.263 01:20:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721949621
00:37:51.263 01:20:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721949621
00:37:51.263 01:20:21 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721949621
00:37:51.263 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721949621_collect-vmstat.pm.log
00:37:51.263 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721949621_collect-cpu-load.pm.log
00:37:51.263 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721949621_collect-cpu-temp.pm.log
00:37:51.263 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721949621_collect-bmc-pm.bmc.pm.log
00:37:52.198 01:20:22 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:37:52.198 01:20:22 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:37:52.198 01:20:22 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:52.198 01:20:22 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:37:52.198 01:20:22 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:37:52.198 01:20:22 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:37:52.198 01:20:22 -- spdk/autopackage.sh@19 -- $ timing_finish
00:37:52.198 01:20:22 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:37:52.198 01:20:22 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:37:52.198 01:20:22 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:37:52.198 01:20:22 -- spdk/autopackage.sh@20 -- $ exit 0
00:37:52.198 01:20:22 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:37:52.198 01:20:22 -- pm/common@29 -- $ signal_monitor_resources TERM
00:37:52.198 01:20:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:37:52.198 01:20:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:52.198 01:20:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:37:52.198 01:20:22 -- pm/common@44 -- $ pid=2023506
00:37:52.198 01:20:22 -- pm/common@50 -- $ kill -TERM 2023506
00:37:52.198 01:20:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:52.198 01:20:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:37:52.198 01:20:22 -- pm/common@44 -- $ pid=2023508
00:37:52.198 01:20:22 -- pm/common@50 -- $ kill -TERM 2023508
00:37:52.198 01:20:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:52.198 01:20:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:37:52.198 01:20:22 -- pm/common@44 -- $ pid=2023510
00:37:52.198 01:20:22 -- pm/common@50 -- $ kill -TERM 2023510
00:37:52.198 01:20:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:52.198 01:20:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:37:52.198 01:20:22 -- pm/common@44 -- $ pid=2023538
00:37:52.198 01:20:22 -- pm/common@50 -- $ sudo -E kill -TERM 2023538
00:37:52.198 + [[ -n 1590488 ]]
00:37:52.198 + sudo kill 1590488
00:37:52.209 [Pipeline] }
00:37:52.228 [Pipeline] // stage
00:37:52.234 [Pipeline] }
00:37:52.252 [Pipeline] // timeout
00:37:52.257 [Pipeline] }
00:37:52.275 [Pipeline] // catchError
00:37:52.281 [Pipeline] }
00:37:52.300 [Pipeline] // wrap
00:37:52.306 [Pipeline] }
00:37:52.323 [Pipeline] // catchError
00:37:52.333 [Pipeline] stage
00:37:52.336 [Pipeline] { (Epilogue)
00:37:52.351 [Pipeline] catchError
00:37:52.353 [Pipeline] {
00:37:52.369 [Pipeline] echo
00:37:52.371 Cleanup processes
00:37:52.378 [Pipeline] sh
00:37:52.666 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:52.666 2023658 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:37:52.666 2023772 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:52.681 [Pipeline] sh
00:37:52.968 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:52.968 ++ grep -v 'sudo pgrep'
00:37:52.968 ++ awk '{print $1}'
00:37:52.968 + sudo kill -9 2023658
00:37:52.981 [Pipeline] sh
00:37:53.271 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:38:03.274 [Pipeline] sh
00:38:03.557 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:38:03.557 Artifacts sizes are good
00:38:03.570 [Pipeline] archiveArtifacts
00:38:03.575 Archiving artifacts
00:38:03.796 [Pipeline] sh
00:38:04.079 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:38:04.095 [Pipeline] cleanWs
00:38:04.105 [WS-CLEANUP] Deleting project workspace...
00:38:04.105 [WS-CLEANUP] Deferred wipeout is used...
00:38:04.112 [WS-CLEANUP] done
00:38:04.114 [Pipeline] }
00:38:04.134 [Pipeline] // catchError
00:38:04.146 [Pipeline] sh
00:38:04.428 + logger -p user.info -t JENKINS-CI
00:38:04.437 [Pipeline] }
00:38:04.453 [Pipeline] // stage
00:38:04.459 [Pipeline] }
00:38:04.476 [Pipeline] // node
00:38:04.481 [Pipeline] End of Pipeline
00:38:04.521 Finished: SUCCESS